2026-01-07 00:00:07.665081 | Job console starting
2026-01-07 00:00:07.726488 | Updating git repos
2026-01-07 00:00:07.864400 | Cloning repos into workspace
2026-01-07 00:00:08.222799 | Restoring repo states
2026-01-07 00:00:08.253896 | Merging changes
2026-01-07 00:00:08.253919 | Checking out repos
2026-01-07 00:00:08.717117 | Preparing playbooks
2026-01-07 00:00:09.763007 | Running Ansible setup
2026-01-07 00:00:17.685554 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-07 00:00:20.287041 |
2026-01-07 00:00:20.287238 | PLAY [Base pre]
2026-01-07 00:00:20.315527 |
2026-01-07 00:00:20.315744 | TASK [Setup log path fact]
2026-01-07 00:00:20.361012 | orchestrator | ok
2026-01-07 00:00:20.408591 |
2026-01-07 00:00:20.409468 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 00:00:20.511851 | orchestrator | ok
2026-01-07 00:00:20.545492 |
2026-01-07 00:00:20.545657 | TASK [emit-job-header : Print job information]
2026-01-07 00:00:20.615675 | # Job Information
2026-01-07 00:00:20.616069 | Ansible Version: 2.16.14
2026-01-07 00:00:20.616131 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-07 00:00:20.616225 | Pipeline: periodic-midnight
2026-01-07 00:00:20.616255 | Executor: 521e9411259a
2026-01-07 00:00:20.616278 | Triggered by: https://github.com/osism/testbed
2026-01-07 00:00:20.616301 | Event ID: 461ce70bf2dc497f9380b0f2b29a549d
2026-01-07 00:00:20.632870 |
2026-01-07 00:00:20.633032 | LOOP [emit-job-header : Print node information]
2026-01-07 00:00:21.023718 | orchestrator | ok:
2026-01-07 00:00:21.031017 | orchestrator | # Node Information
2026-01-07 00:00:21.031141 | orchestrator | Inventory Hostname: orchestrator
2026-01-07 00:00:21.031191 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-07 00:00:21.031218 | orchestrator | Username: zuul-testbed03
2026-01-07 00:00:21.031240 | orchestrator | Distro: Debian 12.12
2026-01-07 00:00:21.031265 | orchestrator | Provider: static-testbed
2026-01-07 00:00:21.031287 | orchestrator | Region:
2026-01-07 00:00:21.031309 | orchestrator | Label: testbed-orchestrator
2026-01-07 00:00:21.031330 | orchestrator | Product Name: OpenStack Nova
2026-01-07 00:00:21.031350 | orchestrator | Interface IP: 81.163.193.140
2026-01-07 00:00:21.067672 |
2026-01-07 00:00:21.067824 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:22.521292 | orchestrator -> localhost | changed
2026-01-07 00:00:22.530627 |
2026-01-07 00:00:22.530771 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-07 00:00:25.160617 | orchestrator -> localhost | changed
2026-01-07 00:00:25.178060 |
2026-01-07 00:00:25.178252 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-07 00:00:26.101948 | orchestrator -> localhost | ok
2026-01-07 00:00:26.115608 |
2026-01-07 00:00:26.115772 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-07 00:00:26.215863 | orchestrator | ok
2026-01-07 00:00:26.306740 | orchestrator | included: /var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-07 00:00:26.356585 |
2026-01-07 00:00:26.356731 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-07 00:00:29.815796 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-07 00:00:29.816040 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/a69b794a49924d19914edb2910e3f0b3_id_rsa
2026-01-07 00:00:29.816080 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/a69b794a49924d19914edb2910e3f0b3_id_rsa.pub
2026-01-07 00:00:29.816107 | orchestrator -> localhost | The key fingerprint is:
2026-01-07 00:00:29.816135 | orchestrator -> localhost | SHA256:Z68MajQrjt7WgJrlycDQ6QU+UebocGgfIN0sBOYy7Ss zuul-build-sshkey
2026-01-07 00:00:29.816158 | orchestrator -> localhost | The key's randomart image is:
2026-01-07 00:00:29.816240 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-07 00:00:29.816265 | orchestrator -> localhost | |o*.+o |
2026-01-07 00:00:29.816287 | orchestrator -> localhost | |+o*+o |
2026-01-07 00:00:29.816308 | orchestrator -> localhost | |==+*. |
2026-01-07 00:00:29.816329 | orchestrator -> localhost | |+** o |
2026-01-07 00:00:29.816349 | orchestrator -> localhost | |o.o= S o |
2026-01-07 00:00:29.816373 | orchestrator -> localhost | |..+.. o o . |
2026-01-07 00:00:29.816395 | orchestrator -> localhost | |EB.. + o. . |
2026-01-07 00:00:29.816415 | orchestrator -> localhost | |o.+oo +. o . |
2026-01-07 00:00:29.816436 | orchestrator -> localhost | | .oooo. o |
2026-01-07 00:00:29.816457 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-07 00:00:29.816514 | orchestrator -> localhost | ok: Runtime: 0:00:01.336232
2026-01-07 00:00:29.852364 |
2026-01-07 00:00:29.852524 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-07 00:00:29.911015 | orchestrator | ok
2026-01-07 00:00:29.961339 | orchestrator | included: /var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-07 00:00:30.000947 |
2026-01-07 00:00:30.001093 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-07 00:00:30.098489 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:30.115461 |
2026-01-07 00:00:30.115644 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-07 00:00:31.906452 | orchestrator | changed
2026-01-07 00:00:31.913626 |
2026-01-07 00:00:31.913752 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-07 00:00:32.314668 | orchestrator | ok
2026-01-07 00:00:32.348314 |
2026-01-07 00:00:32.348475 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-07 00:00:32.923078 | orchestrator | ok
2026-01-07 00:00:32.946550 |
2026-01-07 00:00:32.946712 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-07 00:00:33.514764 | orchestrator | ok
2026-01-07 00:00:33.527150 |
2026-01-07 00:00:33.527343 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-07 00:00:33.574244 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:33.583617 |
2026-01-07 00:00:33.583783 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-07 00:00:35.015075 | orchestrator -> localhost | changed
2026-01-07 00:00:35.067245 |
2026-01-07 00:00:35.067413 | TASK [add-build-sshkey : Add back temp key]
2026-01-07 00:00:36.472676 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/a69b794a49924d19914edb2910e3f0b3_id_rsa (zuul-build-sshkey)
2026-01-07 00:00:36.472929 | orchestrator -> localhost | ok: Runtime: 0:00:00.069039
2026-01-07 00:00:36.480709 |
2026-01-07 00:00:36.480841 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-07 00:00:37.798740 | orchestrator | ok
2026-01-07 00:00:37.812051 |
2026-01-07 00:00:37.812221 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-07 00:00:37.856711 | orchestrator | skipping: Conditional result was False
2026-01-07 00:00:38.014548 |
2026-01-07 00:00:38.014707 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-07 00:00:39.011450 | orchestrator | ok
2026-01-07 00:00:39.059491 |
2026-01-07 00:00:39.060770 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-07 00:00:39.148134 | orchestrator | ok
2026-01-07 00:00:39.169698 |
2026-01-07 00:00:39.169859 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-07 00:00:40.666235 | orchestrator -> localhost | ok
2026-01-07 00:00:40.674339 |
2026-01-07 00:00:40.674467 | TASK [validate-host : Collect information about the host]
2026-01-07 00:00:42.622347 | orchestrator | ok
2026-01-07 00:00:42.670121 |
2026-01-07 00:00:42.670301 | TASK [validate-host : Sanitize hostname]
2026-01-07 00:00:42.854707 | orchestrator | ok
2026-01-07 00:00:42.864432 |
2026-01-07 00:00:42.864626 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-07 00:00:44.822426 | orchestrator -> localhost | changed
2026-01-07 00:00:44.829475 |
2026-01-07 00:00:44.829615 | TASK [validate-host : Collect information about zuul worker]
2026-01-07 00:00:45.509979 | orchestrator | ok
2026-01-07 00:00:45.525985 |
2026-01-07 00:00:45.526607 | TASK [validate-host : Write out all zuul information for each host]
2026-01-07 00:00:47.640582 | orchestrator -> localhost | changed
2026-01-07 00:00:47.656104 |
2026-01-07 00:00:47.656283 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-07 00:00:48.068858 | orchestrator | ok
2026-01-07 00:00:48.092767 |
2026-01-07 00:00:48.092924 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-07 00:02:13.153527 | orchestrator | changed:
2026-01-07 00:02:13.153779 | orchestrator | .d..t...... src/
2026-01-07 00:02:13.153814 | orchestrator | .d..t...... src/github.com/
2026-01-07 00:02:13.153839 | orchestrator | .d..t...... src/github.com/osism/
2026-01-07 00:02:13.153861 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-07 00:02:13.153882 | orchestrator | RedHat.yml
2026-01-07 00:02:13.237289 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-07 00:02:13.237308 | orchestrator | RedHat.yml
2026-01-07 00:02:13.237362 | orchestrator | = 2.2.0"...
2026-01-07 00:02:33.129228 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-07 00:02:33.148131 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-07 00:02:33.293079 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-07 00:02:34.969796 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:35.042452 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-07 00:02:36.088725 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-07 00:02:36.153710 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-07 00:02:36.693417 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-07 00:02:36.693488 | orchestrator |
2026-01-07 00:02:36.693495 | orchestrator | Providers are signed by their developers.
2026-01-07 00:02:36.693501 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-07 00:02:36.693506 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-07 00:02:36.693513 | orchestrator |
2026-01-07 00:02:36.693518 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-07 00:02:36.693522 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-07 00:02:36.693536 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-07 00:02:36.693540 | orchestrator | you run "tofu init" in the future.
2026-01-07 00:02:36.786664 | orchestrator |
2026-01-07 00:02:36.786740 | orchestrator | OpenTofu has been successfully initialized!
2026-01-07 00:02:36.786750 | orchestrator |
2026-01-07 00:02:36.786756 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-07 00:02:36.786762 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-07 00:02:36.786766 | orchestrator | should now work.
2026-01-07 00:02:36.786770 | orchestrator |
2026-01-07 00:02:36.786774 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-07 00:02:36.786779 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-07 00:02:36.786784 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-07 00:02:37.192106 | orchestrator | Created and switched to workspace "ci"!
2026-01-07 00:02:37.192218 | orchestrator |
2026-01-07 00:02:37.192227 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-07 00:02:37.192234 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-07 00:02:37.192239 | orchestrator | for this configuration.
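The init output above shows which provider constraints were resolved: openstack matching ">= 1.53.0" appears verbatim, hashicorp/null is resolved to its latest version, and one constraint line is truncated to `= 2.2.0"...`. As a rough, hedged illustration only, a `required_providers` block consistent with this output might look like the sketch below; the testbed repository's actual configuration is not shown in the log, and attributing the ">= 2.2.0" constraint to hashicorp/local is a guess based on the truncated line.

```hcl
terraform {
  required_providers {
    # "Finding latest version of hashicorp/null..." suggests no version constraint here.
    null = {
      source = "hashicorp/null"
    }
    # The constraint ">= 1.53.0" appears verbatim in the init output above.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # Assumption: the truncated '= 2.2.0"...' line belongs to hashicorp/local
    # (local v2.6.1 was installed, which would satisfy ">= 2.2.0").
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the output recommends, pins the exact selections made here (null v3.2.4, openstack v3.4.0, local v2.6.1) for future `tofu init` runs.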
2026-01-07 00:02:37.334113 | orchestrator | ci.auto.tfvars
2026-01-07 00:02:37.621202 | orchestrator | default_custom.tf
2026-01-07 00:02:44.917288 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-07 00:02:45.424371 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-07 00:02:46.306251 | orchestrator |
2026-01-07 00:02:46.306337 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-07 00:02:46.306346 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-07 00:02:46.306351 | orchestrator | + create
2026-01-07 00:02:46.306356 | orchestrator | <= read (data resources)
2026-01-07 00:02:46.306360 | orchestrator |
2026-01-07 00:02:46.306365 | orchestrator | OpenTofu will perform the following actions:
2026-01-07 00:02:46.306369 | orchestrator |
2026-01-07 00:02:46.306374 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-07 00:02:46.306378 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:46.306382 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-07 00:02:46.306386 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:46.306390 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:46.306394 | orchestrator | + file = (known after apply)
2026-01-07 00:02:46.306398 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306424 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.306428 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:46.306432 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:46.306436 | orchestrator | + most_recent = true
2026-01-07 00:02:46.306440 | orchestrator | + name = (known after apply)
2026-01-07 00:02:46.306444 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:46.306448 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.306455 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:46.306459 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:46.306463 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:46.306467 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:46.306471 | orchestrator | }
2026-01-07 00:02:46.306475 | orchestrator |
2026-01-07 00:02:46.306479 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-07 00:02:46.306483 | orchestrator | # (config refers to values not yet known)
2026-01-07 00:02:46.306487 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-07 00:02:46.306491 | orchestrator | + checksum = (known after apply)
2026-01-07 00:02:46.306495 | orchestrator | + created_at = (known after apply)
2026-01-07 00:02:46.306499 | orchestrator | + file = (known after apply)
2026-01-07 00:02:46.306503 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306506 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.306510 | orchestrator | + min_disk_gb = (known after apply)
2026-01-07 00:02:46.306514 | orchestrator | + min_ram_mb = (known after apply)
2026-01-07 00:02:46.306518 | orchestrator | + most_recent = true
2026-01-07 00:02:46.306522 | orchestrator | + name = (known after apply)
2026-01-07 00:02:46.306525 | orchestrator | + protected = (known after apply)
2026-01-07 00:02:46.306529 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.306533 | orchestrator | + schema = (known after apply)
2026-01-07 00:02:46.306537 | orchestrator | + size_bytes = (known after apply)
2026-01-07 00:02:46.306540 | orchestrator | + tags = (known after apply)
2026-01-07 00:02:46.306544 | orchestrator | + updated_at = (known after apply)
2026-01-07 00:02:46.306548 | orchestrator | }
2026-01-07 00:02:46.306552 | orchestrator |
2026-01-07 00:02:46.306555 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-07 00:02:46.306560 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-07 00:02:46.306564 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.306568 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.306572 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.306576 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.306579 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.306583 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.306587 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.306591 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.306594 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.306598 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-07 00:02:46.306602 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306606 | orchestrator | }
2026-01-07 00:02:46.306610 | orchestrator |
2026-01-07 00:02:46.306614 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-07 00:02:46.306618 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-07 00:02:46.306621 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.306625 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.306629 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.306633 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.306637 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.306640 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.306644 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.306648 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.306652 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.306660 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-07 00:02:46.306664 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306668 | orchestrator | }
2026-01-07 00:02:46.306671 | orchestrator |
2026-01-07 00:02:46.306681 | orchestrator | # local_file.inventory will be created
2026-01-07 00:02:46.306685 | orchestrator | + resource "local_file" "inventory" {
2026-01-07 00:02:46.306689 | orchestrator | + content = (known after apply)
2026-01-07 00:02:46.306692 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.306696 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.306700 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.306704 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.306708 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.306712 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.306715 | orchestrator | + directory_permission = "0777"
2026-01-07 00:02:46.306719 | orchestrator | + file_permission = "0644"
2026-01-07 00:02:46.306723 | orchestrator | + filename = "inventory.ci"
2026-01-07 00:02:46.306727 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306731 | orchestrator | }
2026-01-07 00:02:46.306734 | orchestrator |
2026-01-07 00:02:46.306738 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-07 00:02:46.306742 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-07 00:02:46.306746 | orchestrator | + content = (sensitive value)
2026-01-07 00:02:46.306750 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-07 00:02:46.306754 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-07 00:02:46.306757 | orchestrator | + content_md5 = (known after apply)
2026-01-07 00:02:46.306761 | orchestrator | + content_sha1 = (known after apply)
2026-01-07 00:02:46.306765 | orchestrator | + content_sha256 = (known after apply)
2026-01-07 00:02:46.306780 | orchestrator | + content_sha512 = (known after apply)
2026-01-07 00:02:46.306784 | orchestrator | + directory_permission = "0700"
2026-01-07 00:02:46.306788 | orchestrator | + file_permission = "0600"
2026-01-07 00:02:46.306792 | orchestrator | + filename = ".id_rsa.ci"
2026-01-07 00:02:46.306796 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306799 | orchestrator | }
2026-01-07 00:02:46.306803 | orchestrator |
2026-01-07 00:02:46.306807 | orchestrator | # null_resource.node_semaphore will be created
2026-01-07 00:02:46.306811 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-07 00:02:46.306815 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306819 | orchestrator | }
2026-01-07 00:02:46.306822 | orchestrator |
2026-01-07 00:02:46.306826 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-07 00:02:46.306830 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-07 00:02:46.306834 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.306838 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.306842 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306845 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.306849 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.306853 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-07 00:02:46.306857 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.306861 | orchestrator | + size = 80
2026-01-07 00:02:46.306864 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.306868 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.306872 | orchestrator | }
2026-01-07 00:02:46.306876 | orchestrator |
2026-01-07 00:02:46.306880 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-07 00:02:46.306884 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.306887 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.306931 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.306935 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306943 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.306947 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.306951 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-07 00:02:46.306955 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.306959 | orchestrator | + size = 80
2026-01-07 00:02:46.306962 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.306966 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.306970 | orchestrator | }
2026-01-07 00:02:46.306974 | orchestrator |
2026-01-07 00:02:46.306978 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-07 00:02:46.306981 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.306985 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.306989 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.306993 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.306997 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.307000 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307004 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-07 00:02:46.307008 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307012 | orchestrator | + size = 80
2026-01-07 00:02:46.307015 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307019 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307023 | orchestrator | }
2026-01-07 00:02:46.307027 | orchestrator |
2026-01-07 00:02:46.307031 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-07 00:02:46.307034 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.307038 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307042 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307046 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307050 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.307053 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307057 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-07 00:02:46.307061 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307065 | orchestrator | + size = 80
2026-01-07 00:02:46.307069 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307072 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307076 | orchestrator | }
2026-01-07 00:02:46.307080 | orchestrator |
2026-01-07 00:02:46.307084 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-07 00:02:46.307087 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.307091 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307095 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307099 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307103 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.307106 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307113 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-07 00:02:46.307117 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307121 | orchestrator | + size = 80
2026-01-07 00:02:46.307125 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307129 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307132 | orchestrator | }
2026-01-07 00:02:46.307136 | orchestrator |
2026-01-07 00:02:46.307140 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-07 00:02:46.307144 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.307148 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307152 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307155 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307163 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.307167 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307171 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-07 00:02:46.307174 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307178 | orchestrator | + size = 80
2026-01-07 00:02:46.307182 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307186 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307190 | orchestrator | }
2026-01-07 00:02:46.307193 | orchestrator |
2026-01-07 00:02:46.307197 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-07 00:02:46.307204 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-07 00:02:46.307208 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307212 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307216 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307220 | orchestrator | + image_id = (known after apply)
2026-01-07 00:02:46.307224 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307227 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-07 00:02:46.307231 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307235 | orchestrator | + size = 80
2026-01-07 00:02:46.307239 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307242 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307246 | orchestrator | }
2026-01-07 00:02:46.307250 | orchestrator |
2026-01-07 00:02:46.307254 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-07 00:02:46.307258 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.307262 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307265 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307269 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307273 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307277 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-07 00:02:46.307281 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307285 | orchestrator | + size = 20
2026-01-07 00:02:46.307289 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307292 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307296 | orchestrator | }
2026-01-07 00:02:46.307300 | orchestrator |
2026-01-07 00:02:46.307304 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-07 00:02:46.307308 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.307311 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307315 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307319 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307323 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307326 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-07 00:02:46.307330 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307334 | orchestrator | + size = 20
2026-01-07 00:02:46.307343 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307347 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307351 | orchestrator | }
2026-01-07 00:02:46.307355 | orchestrator |
2026-01-07 00:02:46.307359 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-07 00:02:46.307363 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.307367 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307370 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307374 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307378 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307382 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-07 00:02:46.307386 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307393 | orchestrator | + size = 20
2026-01-07 00:02:46.307397 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307400 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307404 | orchestrator | }
2026-01-07 00:02:46.307408 | orchestrator |
2026-01-07 00:02:46.307412 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-07 00:02:46.307415 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.307419 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.307423 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.307427 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.307431 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.307434 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-07 00:02:46.307438 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.307442 | orchestrator | + size = 20
2026-01-07 00:02:46.307446 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.307449 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.307453 | orchestrator | }
2026-01-07 00:02:46.314998 | orchestrator |
2026-01-07 00:02:46.315048 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-07 00:02:46.315054 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.315059 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.315063 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.315067 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.315071 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.315075 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-07 00:02:46.315079 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.315091 | orchestrator | + size = 20
2026-01-07 00:02:46.315096 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.315099 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.315103 | orchestrator | }
2026-01-07 00:02:46.315107 | orchestrator |
2026-01-07 00:02:46.315111 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-07 00:02:46.315115 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.315119 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.315123 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.315127 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.315130 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.315134 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-07 00:02:46.315138 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.315142 | orchestrator | + size = 20
2026-01-07 00:02:46.315145 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.315149 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.315153 | orchestrator | }
2026-01-07 00:02:46.315157 | orchestrator |
2026-01-07 00:02:46.315160 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-07 00:02:46.315164 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.315168 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.315172 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.315175 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.315179 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.315183 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-07 00:02:46.315187 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.315190 | orchestrator | + size = 20
2026-01-07 00:02:46.315194 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.315198 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.315202 | orchestrator | }
2026-01-07 00:02:46.315206 | orchestrator |
2026-01-07 00:02:46.315210 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-07 00:02:46.315214 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-07 00:02:46.315227 | orchestrator | + attachment = (known after apply)
2026-01-07 00:02:46.315231 | orchestrator | + availability_zone = "nova"
2026-01-07 00:02:46.315235 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.315238 | orchestrator | + metadata = (known after apply)
2026-01-07 00:02:46.315242 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-07 00:02:46.315246 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.315250 | orchestrator | + size = 20
2026-01-07 00:02:46.315254 | orchestrator | + volume_retype_policy = "never"
2026-01-07 00:02:46.315257 | orchestrator | + volume_type = "ssd"
2026-01-07 00:02:46.315261 | orchestrator | }
2026-01-07 00:02:46.315265 | orchestrator |
2026-01-07 00:02:46.315269 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-07 00:02:46.315272 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-07 00:02:46.315276 | orchestrator | + attachment = (known after apply) 2026-01-07 00:02:46.315280 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.315284 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.315287 | orchestrator | + metadata = (known after apply) 2026-01-07 00:02:46.315291 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-07 00:02:46.315295 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.315299 | orchestrator | + size = 20 2026-01-07 00:02:46.315303 | orchestrator | + volume_retype_policy = "never" 2026-01-07 00:02:46.315306 | orchestrator | + volume_type = "ssd" 2026-01-07 00:02:46.315310 | orchestrator | } 2026-01-07 00:02:46.315319 | orchestrator | 2026-01-07 00:02:46.315323 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-07 00:02:46.315327 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-07 00:02:46.315331 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.315335 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.315338 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.315342 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.315346 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.315350 | orchestrator | + config_drive = true 2026-01-07 00:02:46.315353 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.315357 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.315361 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-07 00:02:46.315365 | orchestrator | + force_delete = false 2026-01-07 00:02:46.315369 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.315372 | 
orchestrator | + id = (known after apply) 2026-01-07 00:02:46.315376 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.315380 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.315384 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.315387 | orchestrator | + name = "testbed-manager" 2026-01-07 00:02:46.315391 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.315395 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.315399 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.315402 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.315406 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.315410 | orchestrator | + user_data = (sensitive value) 2026-01-07 00:02:46.315414 | orchestrator | 2026-01-07 00:02:46.315417 | orchestrator | + block_device { 2026-01-07 00:02:46.315421 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.315425 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.315433 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.315437 | orchestrator | + multiattach = false 2026-01-07 00:02:46.315448 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.315452 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315459 | orchestrator | } 2026-01-07 00:02:46.315463 | orchestrator | 2026-01-07 00:02:46.315467 | orchestrator | + network { 2026-01-07 00:02:46.315471 | orchestrator | + access_network = false 2026-01-07 00:02:46.315475 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.315478 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.315482 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.315486 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.315490 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.315493 | orchestrator | + uuid = (known after apply) 2026-01-07 
00:02:46.315497 | orchestrator | } 2026-01-07 00:02:46.315501 | orchestrator | } 2026-01-07 00:02:46.315505 | orchestrator | 2026-01-07 00:02:46.315509 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-07 00:02:46.315513 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.315516 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.315520 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.315524 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.315528 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.315531 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.315535 | orchestrator | + config_drive = true 2026-01-07 00:02:46.315539 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.315543 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.315547 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.315550 | orchestrator | + force_delete = false 2026-01-07 00:02:46.315554 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.315558 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.315562 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.315566 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.315569 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.315573 | orchestrator | + name = "testbed-node-0" 2026-01-07 00:02:46.315577 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.315581 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.315585 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.315588 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.315592 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.315596 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.315600 | orchestrator | 2026-01-07 00:02:46.315604 | orchestrator | + block_device { 2026-01-07 00:02:46.315607 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.315611 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.315615 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.315619 | orchestrator | + multiattach = false 2026-01-07 00:02:46.315622 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.315626 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315630 | orchestrator | } 2026-01-07 00:02:46.315634 | orchestrator | 2026-01-07 00:02:46.315637 | orchestrator | + network { 2026-01-07 00:02:46.315641 | orchestrator | + access_network = false 2026-01-07 00:02:46.315645 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.315649 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.315653 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.315656 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.315660 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.315664 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315668 | orchestrator | } 2026-01-07 00:02:46.315672 | orchestrator | } 2026-01-07 00:02:46.315675 | orchestrator | 2026-01-07 00:02:46.315679 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-07 00:02:46.315683 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.315687 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.315694 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.315698 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.315701 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.315705 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.315709 
| orchestrator | + config_drive = true 2026-01-07 00:02:46.315713 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.315716 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.315720 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.315724 | orchestrator | + force_delete = false 2026-01-07 00:02:46.315728 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.315732 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.315735 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.315739 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.315743 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.315747 | orchestrator | + name = "testbed-node-1" 2026-01-07 00:02:46.315750 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.315754 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.315758 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.315762 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.315765 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.315769 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.315773 | orchestrator | 2026-01-07 00:02:46.315777 | orchestrator | + block_device { 2026-01-07 00:02:46.315781 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.315784 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.315788 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.315792 | orchestrator | + multiattach = false 2026-01-07 00:02:46.315796 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.315800 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315803 | orchestrator | } 2026-01-07 00:02:46.315807 | orchestrator | 2026-01-07 00:02:46.315811 | orchestrator | + network { 2026-01-07 00:02:46.315815 | orchestrator | + access_network = 
false 2026-01-07 00:02:46.315818 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.315822 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.315826 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.315830 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.315834 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.315840 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315844 | orchestrator | } 2026-01-07 00:02:46.315848 | orchestrator | } 2026-01-07 00:02:46.315852 | orchestrator | 2026-01-07 00:02:46.315856 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-07 00:02:46.315859 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.315863 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.315867 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.315873 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.315877 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.315883 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.315887 | orchestrator | + config_drive = true 2026-01-07 00:02:46.315903 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.315907 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.315911 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.315914 | orchestrator | + force_delete = false 2026-01-07 00:02:46.315918 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.315922 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.315926 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.315933 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.315937 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.315940 | orchestrator | + name = 
"testbed-node-2" 2026-01-07 00:02:46.315944 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.315948 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.315952 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.315956 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.315959 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.315963 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.315967 | orchestrator | 2026-01-07 00:02:46.315971 | orchestrator | + block_device { 2026-01-07 00:02:46.315975 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.315979 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.315982 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.315986 | orchestrator | + multiattach = false 2026-01-07 00:02:46.315990 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.315994 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.315998 | orchestrator | } 2026-01-07 00:02:46.316001 | orchestrator | 2026-01-07 00:02:46.316005 | orchestrator | + network { 2026-01-07 00:02:46.316009 | orchestrator | + access_network = false 2026-01-07 00:02:46.316013 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.316017 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.316021 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.316024 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.316028 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.316032 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316036 | orchestrator | } 2026-01-07 00:02:46.316040 | orchestrator | } 2026-01-07 00:02:46.316043 | orchestrator | 2026-01-07 00:02:46.316047 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-07 00:02:46.316051 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.316055 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.316059 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.316063 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.316066 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.316070 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.316074 | orchestrator | + config_drive = true 2026-01-07 00:02:46.316078 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.316082 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.316085 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.316089 | orchestrator | + force_delete = false 2026-01-07 00:02:46.316093 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.316097 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316101 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.316105 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.316108 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.316112 | orchestrator | + name = "testbed-node-3" 2026-01-07 00:02:46.316116 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.316120 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316124 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.316127 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.316131 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.316135 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.316139 | orchestrator | 2026-01-07 00:02:46.316143 | orchestrator | + block_device { 2026-01-07 00:02:46.316154 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.316158 | orchestrator | + delete_on_termination = false 2026-01-07 
00:02:46.316162 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.316170 | orchestrator | + multiattach = false 2026-01-07 00:02:46.316174 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.316177 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316181 | orchestrator | } 2026-01-07 00:02:46.316185 | orchestrator | 2026-01-07 00:02:46.316189 | orchestrator | + network { 2026-01-07 00:02:46.316193 | orchestrator | + access_network = false 2026-01-07 00:02:46.316197 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.316200 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.316204 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.316208 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.316212 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.316216 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316220 | orchestrator | } 2026-01-07 00:02:46.316223 | orchestrator | } 2026-01-07 00:02:46.316227 | orchestrator | 2026-01-07 00:02:46.316231 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-07 00:02:46.316235 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.316239 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.316243 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.316246 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.316250 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.316254 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.316258 | orchestrator | + config_drive = true 2026-01-07 00:02:46.316264 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.316268 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.316272 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.316276 | 
orchestrator | + force_delete = false 2026-01-07 00:02:46.316279 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.316283 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316287 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.316291 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.316294 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.316298 | orchestrator | + name = "testbed-node-4" 2026-01-07 00:02:46.316302 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.316306 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316309 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.316313 | orchestrator | + stop_before_destroy = false 2026-01-07 00:02:46.316317 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.316321 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.316325 | orchestrator | 2026-01-07 00:02:46.316328 | orchestrator | + block_device { 2026-01-07 00:02:46.316332 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.316336 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.316340 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.316343 | orchestrator | + multiattach = false 2026-01-07 00:02:46.316347 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.316351 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316355 | orchestrator | } 2026-01-07 00:02:46.316359 | orchestrator | 2026-01-07 00:02:46.316362 | orchestrator | + network { 2026-01-07 00:02:46.316366 | orchestrator | + access_network = false 2026-01-07 00:02:46.316370 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.316374 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.316377 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.316381 | orchestrator | + name = (known 
after apply) 2026-01-07 00:02:46.316385 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.316389 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316392 | orchestrator | } 2026-01-07 00:02:46.316396 | orchestrator | } 2026-01-07 00:02:46.316404 | orchestrator | 2026-01-07 00:02:46.316408 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-07 00:02:46.316411 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-07 00:02:46.316415 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-07 00:02:46.316419 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-07 00:02:46.316423 | orchestrator | + all_metadata = (known after apply) 2026-01-07 00:02:46.316426 | orchestrator | + all_tags = (known after apply) 2026-01-07 00:02:46.316430 | orchestrator | + availability_zone = "nova" 2026-01-07 00:02:46.316434 | orchestrator | + config_drive = true 2026-01-07 00:02:46.316438 | orchestrator | + created = (known after apply) 2026-01-07 00:02:46.316442 | orchestrator | + flavor_id = (known after apply) 2026-01-07 00:02:46.316445 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-07 00:02:46.316449 | orchestrator | + force_delete = false 2026-01-07 00:02:46.316456 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-07 00:02:46.316460 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316464 | orchestrator | + image_id = (known after apply) 2026-01-07 00:02:46.316468 | orchestrator | + image_name = (known after apply) 2026-01-07 00:02:46.316471 | orchestrator | + key_pair = "testbed" 2026-01-07 00:02:46.316475 | orchestrator | + name = "testbed-node-5" 2026-01-07 00:02:46.316479 | orchestrator | + power_state = "active" 2026-01-07 00:02:46.316483 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316486 | orchestrator | + security_groups = (known after apply) 2026-01-07 00:02:46.316490 | orchestrator | + 
stop_before_destroy = false 2026-01-07 00:02:46.316494 | orchestrator | + updated = (known after apply) 2026-01-07 00:02:46.316498 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-07 00:02:46.316502 | orchestrator | 2026-01-07 00:02:46.316505 | orchestrator | + block_device { 2026-01-07 00:02:46.316509 | orchestrator | + boot_index = 0 2026-01-07 00:02:46.316513 | orchestrator | + delete_on_termination = false 2026-01-07 00:02:46.316517 | orchestrator | + destination_type = "volume" 2026-01-07 00:02:46.316520 | orchestrator | + multiattach = false 2026-01-07 00:02:46.316524 | orchestrator | + source_type = "volume" 2026-01-07 00:02:46.316528 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316532 | orchestrator | } 2026-01-07 00:02:46.316536 | orchestrator | 2026-01-07 00:02:46.316539 | orchestrator | + network { 2026-01-07 00:02:46.316543 | orchestrator | + access_network = false 2026-01-07 00:02:46.316547 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-07 00:02:46.316551 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-07 00:02:46.316554 | orchestrator | + mac = (known after apply) 2026-01-07 00:02:46.316558 | orchestrator | + name = (known after apply) 2026-01-07 00:02:46.316562 | orchestrator | + port = (known after apply) 2026-01-07 00:02:46.316566 | orchestrator | + uuid = (known after apply) 2026-01-07 00:02:46.316569 | orchestrator | } 2026-01-07 00:02:46.316573 | orchestrator | } 2026-01-07 00:02:46.316577 | orchestrator | 2026-01-07 00:02:46.316581 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-07 00:02:46.316585 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-07 00:02:46.316588 | orchestrator | + fingerprint = (known after apply) 2026-01-07 00:02:46.316592 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316596 | orchestrator | + name = "testbed" 2026-01-07 00:02:46.316600 | orchestrator | + private_key = 
(sensitive value) 2026-01-07 00:02:46.316604 | orchestrator | + public_key = (known after apply) 2026-01-07 00:02:46.316607 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316611 | orchestrator | + user_id = (known after apply) 2026-01-07 00:02:46.316615 | orchestrator | } 2026-01-07 00:02:46.316619 | orchestrator | 2026-01-07 00:02:46.316623 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-07 00:02:46.316626 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.316633 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.316637 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316641 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.316645 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316649 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.316652 | orchestrator | } 2026-01-07 00:02:46.316656 | orchestrator | 2026-01-07 00:02:46.316660 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-07 00:02:46.316666 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-07 00:02:46.316670 | orchestrator | + device = (known after apply) 2026-01-07 00:02:46.316674 | orchestrator | + id = (known after apply) 2026-01-07 00:02:46.316678 | orchestrator | + instance_id = (known after apply) 2026-01-07 00:02:46.316682 | orchestrator | + region = (known after apply) 2026-01-07 00:02:46.316685 | orchestrator | + volume_id = (known after apply) 2026-01-07 00:02:46.316689 | orchestrator | } 2026-01-07 00:02:46.316693 | orchestrator | 2026-01-07 00:02:46.316697 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-07 00:02:46.316701 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-07 00:02:46.319553 | orchestrator | + network_id = (known after apply)
2026-01-07 00:02:46.319557 | orchestrator | + no_gateway = false
2026-01-07 00:02:46.319561 | orchestrator | + region = (known after apply)
2026-01-07 00:02:46.319565 | orchestrator | + service_types = (known after apply)
2026-01-07 00:02:46.319572 | orchestrator | + tenant_id = (known after apply)
2026-01-07 00:02:46.319576 | orchestrator |
2026-01-07 00:02:46.319580 | orchestrator | + allocation_pool {
2026-01-07 00:02:46.319583 | orchestrator | + end = "192.168.31.250"
2026-01-07 00:02:46.319587 | orchestrator | + start = "192.168.31.200"
2026-01-07 00:02:46.319591 | orchestrator | }
2026-01-07 00:02:46.319595 | orchestrator | }
2026-01-07 00:02:46.319599 | orchestrator |
2026-01-07 00:02:46.319602 | orchestrator | # terraform_data.image will be created
2026-01-07 00:02:46.319606 | orchestrator | + resource "terraform_data" "image" {
2026-01-07 00:02:46.319610 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.319614 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:46.319618 | orchestrator | + output = (known after apply)
2026-01-07 00:02:46.319621 | orchestrator | }
2026-01-07 00:02:46.319625 | orchestrator |
2026-01-07 00:02:46.319629 | orchestrator | # terraform_data.image_node will be created
2026-01-07 00:02:46.319633 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-07 00:02:46.319637 | orchestrator | + id = (known after apply)
2026-01-07 00:02:46.319640 | orchestrator | + input = "Ubuntu 24.04"
2026-01-07 00:02:46.319644 | orchestrator | + output = (known after apply)
2026-01-07 00:02:46.319648 | orchestrator | }
2026-01-07 00:02:46.319652 | orchestrator |
2026-01-07 00:02:46.319656 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
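For reference, the subnet and allocation pool listed in the plan above correspond to provider configuration roughly like the following. This is a hedged sketch, not the testbed's actual source: the resource type, name, and attribute values are taken from the plan output, while the `network_id` reference is an assumption about how the module wires the subnet to the management network.

```hcl
# Sketch of the subnet resource implied by the plan output above.
# Attribute values come from the plan; the network_id expression is assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from this range; the rest of the /20
  # is left free for statically assigned ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```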
2026-01-07 00:02:46.319659 | orchestrator |
2026-01-07 00:02:46.319663 | orchestrator | Changes to Outputs:
2026-01-07 00:02:46.319667 | orchestrator | + manager_address = (sensitive value)
2026-01-07 00:02:46.319671 | orchestrator | + private_key = (sensitive value)
2026-01-07 00:02:46.572140 | orchestrator | terraform_data.image_node: Creating...
2026-01-07 00:02:46.572229 | orchestrator | terraform_data.image: Creating...
2026-01-07 00:02:46.572238 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ebcd7f8a-8e37-84f6-1ffe-7048e7f1d67c]
2026-01-07 00:02:46.572245 | orchestrator | terraform_data.image: Creation complete after 0s [id=ab9d0888-7bfe-10c8-d0ec-453ebcc311d1]
2026-01-07 00:02:46.586610 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-07 00:02:46.587043 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-07 00:02:46.603486 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-07 00:02:46.603676 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-07 00:02:46.604015 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-07 00:02:46.604446 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-07 00:02:46.605742 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-07 00:02:46.606323 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-07 00:02:46.612822 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-07 00:02:46.626499 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-07 00:02:47.080386 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:47.088072 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-07 00:02:47.247634 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-01-07 00:02:47.251786 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-07 00:02:47.441967 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-07 00:02:47.449780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-07 00:02:47.793976 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=e86742d7-10c5-4d73-b525-18a0581f8bd4]
2026-01-07 00:02:47.802825 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-07 00:02:50.323609 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=2778d154-06c9-4d37-b4c8-396dcdd5fdf1]
2026-01-07 00:02:50.332346 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-07 00:02:50.346303 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=382336b05a1999305ba68e97a972bbc6c4ea4fdc]
2026-01-07 00:02:50.349265 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=995dcd08-654d-4bc0-ab24-70981ba073f5]
2026-01-07 00:02:50.353425 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-07 00:02:50.359581 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d4de0b4db7122d8cf1f91988c4d8cd833fb3b9ec]
2026-01-07 00:02:50.362618 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-07 00:02:50.364651 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=f78b2b96-168b-421a-aa15-4bebe7f5a151]
2026-01-07 00:02:50.372226 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-07 00:02:50.377417 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-07 00:02:50.389128 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=6d387afb-e7b9-4a62-89e6-97c0cffa548c]
2026-01-07 00:02:50.389599 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=82b3532f-8ed6-4997-a6d4-62047998b4b8]
2026-01-07 00:02:50.394096 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-07 00:02:50.394461 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-07 00:02:50.423187 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef]
2026-01-07 00:02:50.430993 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-07 00:02:50.482462 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=c52f0d9f-ed72-456f-8893-789cce9c22ff]
2026-01-07 00:02:50.490502 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-07 00:02:50.759739 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=17558d9b-0f92-44fa-9888-3d1d3136e2b9]
2026-01-07 00:02:50.775058 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=e8953730-7f10-4622-86b0-9bd54769baab]
2026-01-07 00:02:51.235045 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=fa48f556-6683-4ace-bfd0-3266e47c9e8a]
2026-01-07 00:02:52.059352 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=2d269e50-fb6d-4eb4-9fdb-8330136f0359]
2026-01-07 00:02:52.066851 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-07 00:02:53.798498 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=12c08242-c6db-441e-a244-fd35f24986d7]
2026-01-07 00:02:53.848795 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=dbba5bc6-aee5-4aef-b91e-a976a83b6015]
2026-01-07 00:02:53.852355 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=72a16e10-25cf-4871-b1e8-6630ea9868f3]
2026-01-07 00:02:53.961126 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=57a83973-da93-4483-9b1b-3a04918c6db1]
2026-01-07 00:02:53.977841 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=5bec64c4-306d-48cb-b824-91c4511dbf67]
2026-01-07 00:02:54.114195 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=ff82c9e8-95eb-4674-9070-fbf445caa94f]
2026-01-07 00:02:54.907711 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=da825527-3ae8-49c6-a5a6-f328f56944c7]
2026-01-07 00:02:54.914318 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-07 00:02:54.915562 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-07 00:02:54.917295 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-07 00:02:55.120244 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=eeb6731c-a3ee-48c9-9626-7ed9f3c35852]
2026-01-07 00:02:55.135809 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-07 00:02:55.139270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-07 00:02:55.141794 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-07 00:02:55.142340 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-07 00:02:55.142428 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-07 00:02:55.143013 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-07 00:02:55.145142 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-07 00:02:55.145538 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-07 00:02:55.474086 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fe0aed29-acbd-40dc-8394-2b4acd7f17fb]
2026-01-07 00:02:55.485108 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-07 00:02:55.513170 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=5d68be11-e969-44ab-a838-32a86c76f169]
2026-01-07 00:02:55.524711 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-07 00:02:55.725856 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=56bf011e-f6d0-4649-b2ea-0bbe63a446af]
2026-01-07 00:02:55.732409 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-07 00:02:56.114639 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=9787c67e-2a76-454b-94cf-a2708f217604]
2026-01-07 00:02:56.120245 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-07 00:02:56.126333 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=649bb406-266f-4bfd-909d-1a84c0e0ab9a]
2026-01-07 00:02:56.130751 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-07 00:02:56.197803 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c23c76e2-3b58-4fb3-9860-8e71eb9b7cd6]
2026-01-07 00:02:56.203402 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-07 00:02:56.489590 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=001e0db0-4a4c-4193-9ee5-08af97adee5f]
2026-01-07 00:02:56.497171 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-07 00:02:56.505053 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=64d6a457-63da-4afe-a072-6673cf8ff82c]
2026-01-07 00:02:56.515436 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-07 00:02:56.568123 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=672c77f6-3467-46f1-8b92-f6f388db0ebb]
2026-01-07 00:02:56.739891 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=1d390051-621b-402a-adcc-ef9660b8991d]
2026-01-07 00:02:56.748516 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=07c83d6b-fc46-4903-b079-e189af249e82]
2026-01-07 00:02:57.078346 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=bd3ac206-1800-4338-9160-fc231d97d193]
2026-01-07 00:02:57.569683 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=0f95b8a2-eb8c-4e51-8fd1-fb7974b1c84b]
2026-01-07 00:02:57.616420 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=087bf1d4-7baf-4860-aadb-082c79e04997]
2026-01-07 00:02:57.902873 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=1b96e2ad-c6c8-46b6-8753-bc28a57ef2f5]
2026-01-07 00:02:57.987046 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 3s [id=9e0a5533-5f8d-489e-94ff-38eb22ccd2ab]
2026-01-07 00:02:58.645711 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=4d761624-cea2-42d2-8e32-6c6571e13f87]
2026-01-07 00:02:58.657562 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-07 00:02:58.677474 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-07 00:02:58.683394 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-07 00:02:58.687447 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-07 00:02:58.693254 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-07 00:02:58.694452 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-07 00:02:58.704017 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-07 00:02:59.057525 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 3s [id=717ecb72-c5e0-4de4-b348-752676dd8ce6]
2026-01-07 00:03:01.716592 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=e2ca58bc-c603-418c-a2f6-7c1119b76aed]
2026-01-07 00:03:01.731103 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-07 00:03:01.732371 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-07 00:03:01.735361 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=afa197bbfd4bdca38657646b288ddaccabaf8396]
2026-01-07 00:03:01.738381 | orchestrator | local_file.inventory: Creating...
2026-01-07 00:03:01.741907 | orchestrator | local_file.inventory: Creation complete after 0s [id=59a3a08209c51c7c40861fd164ae3028a8d37e5f]
2026-01-07 00:03:03.281445 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=e2ca58bc-c603-418c-a2f6-7c1119b76aed]
2026-01-07 00:03:08.684566 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-07 00:03:08.684714 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-07 00:03:08.693718 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-07 00:03:08.694905 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-07 00:03:08.701192 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-07 00:03:08.706561 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-07 00:03:18.693033 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-07 00:03:18.693172 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-07 00:03:18.694262 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-07 00:03:18.695383 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-07 00:03:18.701627 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-07 00:03:18.706978 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-07 00:03:28.702275 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-07 00:03:28.702421 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-07 00:03:28.702439 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-07 00:03:28.702449 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-07 00:03:28.702458 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-07 00:03:28.707740 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-07 00:03:29.694907 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=902f91fd-c898-41ad-8dfa-753c3436be35]
2026-01-07 00:03:29.701170 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=57263d76-122a-4df8-981d-53a073e7c7e6]
2026-01-07 00:03:29.747385 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=30b62c7b-cc01-4dd0-a6ad-298aa7136968]
2026-01-07 00:03:29.828251 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=e8d731ca-e340-4648-8e29-627a06c2f7a1]
2026-01-07 00:03:30.371223 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=6b8a3762-82af-456b-a6e1-05e6be456f9c]
2026-01-07 00:03:38.710609 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-07 00:03:48.719449 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-07 00:03:50.463156 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=1fbc65f3-5443-4277-be4f-edbde2b4e5dc]
2026-01-07 00:03:50.489934 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-07 00:03:50.501461 | orchestrator | null_resource.node_semaphore: Creation complete after 1s [id=6821728226127302992]
2026-01-07 00:03:50.508773 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-07 00:03:50.521070 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-07 00:03:50.526315 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-07 00:03:50.528282 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-07 00:03:50.528471 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-07 00:03:50.534188 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-07 00:03:50.536502 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-07 00:03:50.544648 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-07 00:03:50.551981 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-07 00:03:50.562275 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-07 00:03:54.049211 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=902f91fd-c898-41ad-8dfa-753c3436be35/17558d9b-0f92-44fa-9888-3d1d3136e2b9]
2026-01-07 00:03:54.050154 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=57263d76-122a-4df8-981d-53a073e7c7e6/82b3532f-8ed6-4997-a6d4-62047998b4b8]
2026-01-07 00:03:54.137325 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=57263d76-122a-4df8-981d-53a073e7c7e6/995dcd08-654d-4bc0-ab24-70981ba073f5]
2026-01-07 00:03:54.137457 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=902f91fd-c898-41ad-8dfa-753c3436be35/c52f0d9f-ed72-456f-8893-789cce9c22ff]
2026-01-07 00:03:54.167491 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=1fbc65f3-5443-4277-be4f-edbde2b4e5dc/f78b2b96-168b-421a-aa15-4bebe7f5a151]
2026-01-07 00:03:54.180115 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=1fbc65f3-5443-4277-be4f-edbde2b4e5dc/2778d154-06c9-4d37-b4c8-396dcdd5fdf1]
2026-01-07 00:04:00.261495 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=57263d76-122a-4df8-981d-53a073e7c7e6/6d387afb-e7b9-4a62-89e6-97c0cffa548c]
2026-01-07 00:04:00.290520 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=902f91fd-c898-41ad-8dfa-753c3436be35/0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef]
2026-01-07 00:04:00.309471 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=1fbc65f3-5443-4277-be4f-edbde2b4e5dc/e8953730-7f10-4622-86b0-9bd54769baab]
2026-01-07 00:04:00.564545 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-07 00:04:10.564904 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-07 00:04:11.586352 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f87fa421-8e63-481d-9e64-adf7411d4b9e]
2026-01-07 00:04:11.606167 | orchestrator |
2026-01-07 00:04:11.606265 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
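The `manager_address` and `private_key` values printed below are suppressed because they are declared sensitive. Declarations along these lines would produce exactly that behavior; this is a sketch under assumptions, as only the two output names appear in the log, and the value expressions are guesses at how the module might wire them up.

```hcl
# Sketch of the sensitive outputs implied by the log; only the names
# "manager_address" and "private_key" come from the log itself.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumed expression
  sensitive = true # causes Terraform to print "(sensitive value)" instead of the address
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content # assumed expression
  sensitive = true
}
```

Sensitive outputs can still be read deliberately, e.g. with `terraform output -raw manager_address`, which is presumably how the later "Fetch manager address" task obtains the value.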
2026-01-07 00:04:11.606290 | orchestrator |
2026-01-07 00:04:11.606298 | orchestrator | Outputs:
2026-01-07 00:04:11.606305 | orchestrator |
2026-01-07 00:04:11.606309 | orchestrator | manager_address =
2026-01-07 00:04:11.606329 | orchestrator | private_key =
2026-01-07 00:04:11.768999 | orchestrator | ok: Runtime: 0:01:42.631750
2026-01-07 00:04:11.799299 |
2026-01-07 00:04:11.799435 | TASK [Fetch manager address]
2026-01-07 00:04:12.355460 | orchestrator | ok
2026-01-07 00:04:12.371343 |
2026-01-07 00:04:12.371497 | TASK [Set manager_host address]
2026-01-07 00:04:12.464849 | orchestrator | ok
2026-01-07 00:04:12.472045 |
2026-01-07 00:04:12.472221 | LOOP [Update ansible collections]
2026-01-07 00:04:15.148434 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:15.148864 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:15.148924 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:15.148963 | orchestrator | Process install dependency map
2026-01-07 00:04:15.148998 | orchestrator | Starting collection install process
2026-01-07 00:04:15.149030 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-01-07 00:04:15.149066 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-01-07 00:04:15.149164 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-07 00:04:15.149260 | orchestrator | ok: Item: commons Runtime: 0:00:02.296805
2026-01-07 00:04:16.297666 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:04:16.297870 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-07 00:04:16.297934 | orchestrator | Starting galaxy collection install process
2026-01-07 00:04:16.297983 | orchestrator | Process install dependency map
2026-01-07 00:04:16.298028 | orchestrator | Starting collection install process
2026-01-07 00:04:16.298069 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-01-07 00:04:16.298205 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-01-07 00:04:16.298251 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-07 00:04:16.298318 | orchestrator | ok: Item: services Runtime: 0:00:00.848130
2026-01-07 00:04:16.318166 |
2026-01-07 00:04:16.318338 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-07 00:04:26.929636 | orchestrator | ok
2026-01-07 00:04:26.942789 |
2026-01-07 00:04:26.943163 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-07 00:05:27.000902 | orchestrator | ok
2026-01-07 00:05:27.010273 |
2026-01-07 00:05:27.010420 | TASK [Fetch manager ssh hostkey]
2026-01-07 00:05:28.587446 | orchestrator | Output suppressed because no_log was given
2026-01-07 00:05:28.601716 |
2026-01-07 00:05:28.601899 | TASK [Get ssh keypair from terraform environment]
2026-01-07 00:05:29.141036 | orchestrator | ok: Runtime: 0:00:00.009474
2026-01-07 00:05:29.157836 |
2026-01-07 00:05:29.158017 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-07 00:05:29.205204 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-07 00:05:29.214927 |
2026-01-07 00:05:29.215092 | TASK [Run manager part 0]
2026-01-07 00:05:30.418106 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-07 00:05:30.494612 | orchestrator |
2026-01-07 00:05:30.494679 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-07 00:05:30.494687 | orchestrator |
2026-01-07 00:05:30.494706 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-07 00:05:32.111406 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:32.111458 | orchestrator |
2026-01-07 00:05:32.111486 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-07 00:05:32.111498 | orchestrator |
2026-01-07 00:05:32.111509 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:05:34.162619 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:34.162690 | orchestrator |
2026-01-07 00:05:34.162699 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-07 00:05:34.892193 | orchestrator | ok: [testbed-manager]
2026-01-07 00:05:34.892252 | orchestrator |
2026-01-07 00:05:34.892260 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-07 00:05:34.934569 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:34.934618 | orchestrator |
2026-01-07 00:05:34.934629 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-07 00:05:34.964095 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:34.964166 | orchestrator |
2026-01-07 00:05:34.964182 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-07 00:05:34.993690 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:34.993734 | orchestrator |
2026-01-07 00:05:34.993740 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-07 00:05:35.023438 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:35.023480 | orchestrator |
2026-01-07 00:05:35.023487 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-07 00:05:35.055652 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:35.055709 | orchestrator |
2026-01-07 00:05:35.055721 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-07 00:05:35.088287 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:35.088336 | orchestrator |
2026-01-07 00:05:35.088349 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-07 00:05:35.122762 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:05:35.122805 | orchestrator |
2026-01-07 00:05:35.122813 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-07 00:05:35.862668 | orchestrator | changed: [testbed-manager]
2026-01-07 00:05:35.862912 | orchestrator |
2026-01-07 00:05:35.862921 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-07 00:08:35.025665 | orchestrator | changed: [testbed-manager]
2026-01-07 00:08:35.025782 | orchestrator |
2026-01-07 00:08:35.025802 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-07 00:09:51.672122 | orchestrator | changed: [testbed-manager]
2026-01-07 00:09:51.672250 | orchestrator |
2026-01-07 00:09:51.672277 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-07 00:10:13.476824 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:13.476888 | orchestrator |
2026-01-07 00:10:13.476900 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-07 00:10:23.780518 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:23.780636 | orchestrator |
2026-01-07 00:10:23.780655 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-07 00:10:23.833711 | orchestrator | ok: [testbed-manager]
2026-01-07 00:10:23.833817 | orchestrator |
2026-01-07 00:10:23.833836 | orchestrator | TASK [Get current user] ********************************************************
2026-01-07 00:10:24.675991 | orchestrator | ok: [testbed-manager]
2026-01-07 00:10:24.676278 | orchestrator |
2026-01-07 00:10:24.676302 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-07 00:10:25.437682 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:25.437814 | orchestrator |
2026-01-07 00:10:25.437834 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-07 00:10:31.918481 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:31.918547 | orchestrator |
2026-01-07 00:10:31.918577 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-07 00:10:39.966436 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:39.966541 | orchestrator |
2026-01-07 00:10:39.966561 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-07 00:10:44.217225 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:44.217331 | orchestrator |
2026-01-07 00:10:44.217349 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-01-07 00:10:46.066315 | orchestrator | changed: [testbed-manager]
2026-01-07 00:10:46.066427 | orchestrator |
2026-01-07 00:10:46.066441 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-01-07
00:10:47.215439 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-07 00:10:47.215501 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-07 00:10:47.215512 | orchestrator | 2026-01-07 00:10:47.215521 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-07 00:10:47.252730 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-07 00:10:47.252808 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-07 00:10:47.252817 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-07 00:10:47.252824 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-07 00:10:51.144439 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-07 00:10:51.144496 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-07 00:10:51.144502 | orchestrator | 2026-01-07 00:10:51.144507 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-07 00:10:51.748193 | orchestrator | changed: [testbed-manager] 2026-01-07 00:10:51.748301 | orchestrator | 2026-01-07 00:10:51.748316 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-07 00:14:12.709654 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-07 00:14:12.709731 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-07 00:14:12.709738 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-07 00:14:12.709743 | orchestrator | 2026-01-07 00:14:12.709748 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-07 00:14:14.987086 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-07 00:14:14.987123 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-07 00:14:14.987129 | orchestrator | 2026-01-07 00:14:14.987134 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-07 00:14:14.987138 | orchestrator | 2026-01-07 00:14:14.987143 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:14:16.368803 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:16.368846 | orchestrator | 2026-01-07 00:14:16.368855 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-07 00:14:16.416177 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:16.416212 | orchestrator | 2026-01-07 00:14:16.416219 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-07 00:14:16.486821 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:16.486862 | orchestrator | 2026-01-07 00:14:16.486870 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-07 00:14:17.252052 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:17.252096 | orchestrator | 2026-01-07 00:14:17.252105 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-07 00:14:17.959518 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:17.959558 | orchestrator | 2026-01-07 00:14:17.959566 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-07 00:14:19.306467 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-07 00:14:19.306664 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-07 00:14:19.306680 | orchestrator | 2026-01-07 00:14:19.306700 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-07 00:14:20.734691 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:20.734765 | orchestrator | 2026-01-07 00:14:20.734775 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-07 00:14:22.503209 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-07 00:14:22.503298 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-07 00:14:22.503312 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-07 00:14:22.503323 | orchestrator | 2026-01-07 00:14:22.503335 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-07 00:14:22.552289 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:22.552401 | orchestrator | 2026-01-07 00:14:22.552422 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-07 00:14:22.629140 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:22.629212 | orchestrator | 2026-01-07 00:14:22.629241 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-07 00:14:23.161268 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:23.161362 | orchestrator | 2026-01-07 00:14:23.161380 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-07 00:14:23.233008 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:23.233128 | orchestrator | 2026-01-07 00:14:23.233148 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-07 00:14:24.104738 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:14:24.104785 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:24.104794 | orchestrator | 2026-01-07 00:14:24.104802 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-07 00:14:24.148010 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:24.148049 | orchestrator | 2026-01-07 00:14:24.148057 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-07 00:14:24.185867 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:24.185912 | orchestrator | 2026-01-07 00:14:24.185922 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-07 00:14:24.217873 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:24.217913 | orchestrator | 2026-01-07 00:14:24.217923 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-07 00:14:24.282832 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:24.282867 | orchestrator | 2026-01-07 00:14:24.282874 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-07 00:14:24.962371 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:24.962421 | orchestrator | 2026-01-07 00:14:24.962429 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-07 00:14:24.962436 | orchestrator | 2026-01-07 00:14:24.962441 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:14:26.331536 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:26.331583 | orchestrator | 2026-01-07 00:14:26.331590 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-07 00:14:27.295211 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:27.295344 | orchestrator | 2026-01-07 00:14:27.295351 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:14:27.295357 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-07 00:14:27.295362 | orchestrator | 2026-01-07 00:14:27.629990 | orchestrator | ok: Runtime: 0:08:57.849601 2026-01-07 00:14:27.648526 | 2026-01-07 00:14:27.648685 | TASK [Point out that the log in on the manager is now possible] 2026-01-07 00:14:27.700796 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-07 00:14:27.712082 | 2026-01-07 00:14:27.712319 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-07 00:14:27.761429 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2026-01-07 00:14:27.771922 | 2026-01-07 00:14:27.772077 | TASK [Run manager part 1 + 2] 2026-01-07 00:14:28.647252 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-07 00:14:28.707696 | orchestrator | 2026-01-07 00:14:28.707761 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-07 00:14:28.707768 | orchestrator | 2026-01-07 00:14:28.707782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:14:31.634328 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:31.634447 | orchestrator | 2026-01-07 00:14:31.634506 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-07 00:14:31.674829 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:31.674937 | orchestrator | 2026-01-07 00:14:31.674958 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-07 00:14:31.714093 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:31.714238 | orchestrator | 2026-01-07 00:14:31.714255 | orchestrator | TASK [osism.commons.repository : Gather variables for 
each operating system] *** 2026-01-07 00:14:31.756177 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:31.756266 | orchestrator | 2026-01-07 00:14:31.756278 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-07 00:14:31.822064 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:31.822169 | orchestrator | 2026-01-07 00:14:31.822187 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-07 00:14:31.883138 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:31.883251 | orchestrator | 2026-01-07 00:14:31.883270 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-07 00:14:31.946861 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-07 00:14:31.946966 | orchestrator | 2026-01-07 00:14:31.946983 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-07 00:14:32.684716 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:32.684792 | orchestrator | 2026-01-07 00:14:32.684803 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-07 00:14:32.738459 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:32.738507 | orchestrator | 2026-01-07 00:14:32.738513 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-07 00:14:34.140828 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:34.140907 | orchestrator | 2026-01-07 00:14:34.140920 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-07 00:14:34.721041 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:34.721093 | orchestrator | 2026-01-07 00:14:34.721099 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-07 00:14:35.858745 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:35.858816 | orchestrator | 2026-01-07 00:14:35.858833 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-07 00:14:49.772492 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:49.772688 | orchestrator | 2026-01-07 00:14:49.772723 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-07 00:14:50.458384 | orchestrator | ok: [testbed-manager] 2026-01-07 00:14:50.458475 | orchestrator | 2026-01-07 00:14:50.458494 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-07 00:14:50.515370 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:14:50.515458 | orchestrator | 2026-01-07 00:14:50.515475 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-07 00:14:51.417654 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:51.417721 | orchestrator | 2026-01-07 00:14:51.417731 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-07 00:14:52.325687 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:52.325773 | orchestrator | 2026-01-07 00:14:52.325787 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-07 00:14:52.912732 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:52.912828 | orchestrator | 2026-01-07 00:14:52.912844 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-07 00:14:52.954965 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-07 00:14:52.955045 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-07 00:14:52.955052 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-07 00:14:52.955057 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-07 00:14:55.027780 | orchestrator | changed: [testbed-manager] 2026-01-07 00:14:55.027882 | orchestrator | 2026-01-07 00:14:55.027900 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-07 00:15:04.130564 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-07 00:15:04.130722 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-07 00:15:04.130742 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-07 00:15:04.130755 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-07 00:15:04.130775 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-07 00:15:04.130787 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-07 00:15:04.130798 | orchestrator | 2026-01-07 00:15:04.130810 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-07 00:15:05.220209 | orchestrator | changed: [testbed-manager] 2026-01-07 00:15:05.220310 | orchestrator | 2026-01-07 00:15:05.220326 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-07 00:15:05.267163 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:15:05.267249 | orchestrator | 2026-01-07 00:15:05.267264 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-07 00:15:08.490867 | orchestrator | changed: [testbed-manager] 2026-01-07 00:15:08.490961 | orchestrator | 2026-01-07 00:15:08.490988 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-07 00:15:08.533339 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:15:08.533405 | 
orchestrator | 2026-01-07 00:15:08.533420 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-07 00:16:44.432729 | orchestrator | changed: [testbed-manager] 2026-01-07 00:16:44.432945 | orchestrator | 2026-01-07 00:16:44.432969 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-07 00:16:45.539169 | orchestrator | ok: [testbed-manager] 2026-01-07 00:16:45.539220 | orchestrator | 2026-01-07 00:16:45.539231 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:16:45.539242 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-07 00:16:45.539251 | orchestrator | 2026-01-07 00:16:45.918554 | orchestrator | ok: Runtime: 0:02:17.563909 2026-01-07 00:16:45.936891 | 2026-01-07 00:16:45.937066 | TASK [Reboot manager] 2026-01-07 00:16:47.474789 | orchestrator | ok: Runtime: 0:00:00.931464 2026-01-07 00:16:47.483716 | 2026-01-07 00:16:47.483844 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-07 00:17:02.279416 | orchestrator | ok 2026-01-07 00:17:02.291207 | 2026-01-07 00:17:02.291363 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-07 00:18:02.333464 | orchestrator | ok 2026-01-07 00:18:02.343429 | 2026-01-07 00:18:02.343582 | TASK [Deploy manager + bootstrap nodes] 2026-01-07 00:18:04.806289 | orchestrator | 2026-01-07 00:18:04.806469 | orchestrator | # DEPLOY MANAGER 2026-01-07 00:18:04.806492 | orchestrator | 2026-01-07 00:18:04.806505 | orchestrator | + set -e 2026-01-07 00:18:04.806518 | orchestrator | + echo 2026-01-07 00:18:04.806531 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-07 00:18:04.806546 | orchestrator | + echo 2026-01-07 00:18:04.806593 | orchestrator | + cat /opt/manager-vars.sh 2026-01-07 00:18:04.810430 | orchestrator | export NUMBER_OF_NODES=6 2026-01-07 
00:18:04.810481 | orchestrator | 2026-01-07 00:18:04.810494 | orchestrator | export CEPH_VERSION=reef 2026-01-07 00:18:04.810506 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-07 00:18:04.810517 | orchestrator | export MANAGER_VERSION=9.5.0 2026-01-07 00:18:04.810540 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-07 00:18:04.810551 | orchestrator | 2026-01-07 00:18:04.810567 | orchestrator | export ARA=false 2026-01-07 00:18:04.810577 | orchestrator | export DEPLOY_MODE=manager 2026-01-07 00:18:04.810594 | orchestrator | export TEMPEST=true 2026-01-07 00:18:04.810604 | orchestrator | export IS_ZUUL=true 2026-01-07 00:18:04.810614 | orchestrator | 2026-01-07 00:18:04.810657 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241 2026-01-07 00:18:04.810668 | orchestrator | export EXTERNAL_API=false 2026-01-07 00:18:04.810678 | orchestrator | 2026-01-07 00:18:04.810718 | orchestrator | export IMAGE_USER=ubuntu 2026-01-07 00:18:04.810733 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-07 00:18:04.810743 | orchestrator | 2026-01-07 00:18:04.810753 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-07 00:18:04.810771 | orchestrator | 2026-01-07 00:18:04.810781 | orchestrator | + echo 2026-01-07 00:18:04.810793 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-07 00:18:04.812209 | orchestrator | ++ export INTERACTIVE=false 2026-01-07 00:18:04.812231 | orchestrator | ++ INTERACTIVE=false 2026-01-07 00:18:04.812268 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-07 00:18:04.812289 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-07 00:18:04.812304 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:18:04.812317 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:18:04.812328 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 00:18:04.812437 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:18:04.812450 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:18:04.812460 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-07 00:18:04.812470 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-07 00:18:04.812480 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-07 00:18:04.812490 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-07 00:18:04.812500 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-07 00:18:04.812520 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-07 00:18:04.812533 | orchestrator | ++ export ARA=false 2026-01-07 00:18:04.812564 | orchestrator | ++ ARA=false 2026-01-07 00:18:04.812576 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-07 00:18:04.812586 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-07 00:18:04.812596 | orchestrator | ++ export TEMPEST=true 2026-01-07 00:18:04.812605 | orchestrator | ++ TEMPEST=true 2026-01-07 00:18:04.812656 | orchestrator | ++ export IS_ZUUL=true 2026-01-07 00:18:04.812668 | orchestrator | ++ IS_ZUUL=true 2026-01-07 00:18:04.812690 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241 2026-01-07 00:18:04.812701 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241 2026-01-07 00:18:04.812711 | orchestrator | ++ export EXTERNAL_API=false 2026-01-07 00:18:04.812721 | orchestrator | ++ EXTERNAL_API=false 2026-01-07 00:18:04.812731 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-07 00:18:04.812741 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-07 00:18:04.812754 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-07 00:18:04.812764 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-07 00:18:04.812773 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-07 00:18:04.812783 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-07 00:18:04.812794 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-07 00:18:04.879029 | orchestrator | + docker version 2026-01-07 00:18:05.218235 | orchestrator | Client: Docker Engine - Community 2026-01-07 00:18:05.218322 | orchestrator | Version: 27.5.1 
2026-01-07 00:18:05.218338 | orchestrator | API version: 1.47 2026-01-07 00:18:05.218352 | orchestrator | Go version: go1.22.11 2026-01-07 00:18:05.218363 | orchestrator | Git commit: 9f9e405 2026-01-07 00:18:05.218374 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:18:05.218387 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:18:05.218398 | orchestrator | Context: default 2026-01-07 00:18:05.218409 | orchestrator | 2026-01-07 00:18:05.218421 | orchestrator | Server: Docker Engine - Community 2026-01-07 00:18:05.218432 | orchestrator | Engine: 2026-01-07 00:18:05.218443 | orchestrator | Version: 27.5.1 2026-01-07 00:18:05.218454 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-07 00:18:05.218496 | orchestrator | Go version: go1.22.11 2026-01-07 00:18:05.218508 | orchestrator | Git commit: 4c9b3b0 2026-01-07 00:18:05.218519 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-07 00:18:05.218529 | orchestrator | OS/Arch: linux/amd64 2026-01-07 00:18:05.218540 | orchestrator | Experimental: false 2026-01-07 00:18:05.218561 | orchestrator | containerd: 2026-01-07 00:18:05.218581 | orchestrator | Version: v2.2.1 2026-01-07 00:18:05.218600 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-07 00:18:05.218681 | orchestrator | runc: 2026-01-07 00:18:05.218701 | orchestrator | Version: 1.3.4 2026-01-07 00:18:05.218716 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-07 00:18:05.218732 | orchestrator | docker-init: 2026-01-07 00:18:05.218750 | orchestrator | Version: 0.19.0 2026-01-07 00:18:05.218769 | orchestrator | GitCommit: de40ad0 2026-01-07 00:18:05.221907 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-07 00:18:05.231694 | orchestrator | + set -e 2026-01-07 00:18:05.231780 | orchestrator | + source /opt/manager-vars.sh 2026-01-07 00:18:05.231802 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-07 00:18:05.231821 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-07 
00:18:05.231840 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-07 00:18:05.231858 | orchestrator | ++ CEPH_VERSION=reef 2026-01-07 00:18:05.231877 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-07 00:18:05.231898 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-07 00:18:05.231916 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-07 00:18:05.232001 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-07 00:18:05.232025 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-07 00:18:05.232045 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-07 00:18:05.232063 | orchestrator | ++ export ARA=false 2026-01-07 00:18:05.232082 | orchestrator | ++ ARA=false 2026-01-07 00:18:05.232102 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-07 00:18:05.232123 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-07 00:18:05.232141 | orchestrator | ++ export TEMPEST=true 2026-01-07 00:18:05.232153 | orchestrator | ++ TEMPEST=true 2026-01-07 00:18:05.232164 | orchestrator | ++ export IS_ZUUL=true 2026-01-07 00:18:05.232173 | orchestrator | ++ IS_ZUUL=true 2026-01-07 00:18:05.232183 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241 2026-01-07 00:18:05.232193 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241 2026-01-07 00:18:05.232203 | orchestrator | ++ export EXTERNAL_API=false 2026-01-07 00:18:05.232212 | orchestrator | ++ EXTERNAL_API=false 2026-01-07 00:18:05.232222 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-07 00:18:05.232232 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-07 00:18:05.232241 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-07 00:18:05.232250 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-07 00:18:05.232260 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-07 00:18:05.232270 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-07 00:18:05.232280 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-07 00:18:05.232289 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-07 00:18:05.232299 | orchestrator | ++ INTERACTIVE=false 2026-01-07 00:18:05.232309 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-07 00:18:05.232323 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-07 00:18:05.232344 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-07 00:18:05.232355 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-01-07 00:18:05.239681 | orchestrator | + set -e 2026-01-07 00:18:05.239744 | orchestrator | + VERSION=9.5.0 2026-01-07 00:18:05.239756 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-01-07 00:18:05.249830 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-07 00:18:05.249894 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-07 00:18:05.253425 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-07 00:18:05.256744 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-01-07 00:18:05.265156 | orchestrator | /opt/configuration ~ 2026-01-07 00:18:05.265237 | orchestrator | + set -e 2026-01-07 00:18:05.265259 | orchestrator | + pushd /opt/configuration 2026-01-07 00:18:05.265276 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-07 00:18:05.266725 | orchestrator | + source /opt/venv/bin/activate 2026-01-07 00:18:05.267756 | orchestrator | ++ deactivate nondestructive 2026-01-07 00:18:05.267780 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:05.267826 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:05.267856 | orchestrator | ++ hash -r 2026-01-07 00:18:05.267865 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:05.267872 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-07 00:18:05.267886 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-07 00:18:05.267893 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-07 00:18:05.268277 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-07 00:18:05.268289 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-07 00:18:05.268296 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-07 00:18:05.268302 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-07 00:18:05.268345 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-07 00:18:05.268365 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-07 00:18:05.268372 | orchestrator | ++ export PATH 2026-01-07 00:18:05.268379 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:05.268385 | orchestrator | ++ '[' -z '' ']' 2026-01-07 00:18:05.268392 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-07 00:18:05.268398 | orchestrator | ++ PS1='(venv) ' 2026-01-07 00:18:05.268407 | orchestrator | ++ export PS1 2026-01-07 00:18:05.268413 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-07 00:18:05.268420 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-07 00:18:05.268428 | orchestrator | ++ hash -r 2026-01-07 00:18:05.268435 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-01-07 00:18:06.254423 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-01-07 00:18:06.255353 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-01-07 00:18:06.256878 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-01-07 00:18:06.258393 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-01-07 00:18:06.259484 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2026-01-07 00:18:06.269615 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-01-07 00:18:06.271396 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-01-07 00:18:06.272166 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-01-07 00:18:06.273606 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-01-07 00:18:06.303801 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-01-07 00:18:06.305009 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-01-07 00:18:06.306698 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.2) 2026-01-07 00:18:06.307913 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-01-07 00:18:06.311924 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-01-07 00:18:06.525295 | orchestrator | ++ which gilt 2026-01-07 00:18:06.529220 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-01-07 00:18:06.529296 | orchestrator | + /opt/venv/bin/gilt overlay 2026-01-07 00:18:06.759868 | orchestrator | osism.cfg-generics: 2026-01-07 00:18:06.930777 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-01-07 00:18:06.930971 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-01-07 00:18:06.931004 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-01-07 00:18:06.931023 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-01-07 00:18:07.554602 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-01-07 00:18:07.567866 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-01-07 00:18:07.905434 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-01-07 00:18:07.951901 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-07 00:18:07.952013 | orchestrator | + deactivate 2026-01-07 00:18:07.952031 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-07 00:18:07.952044 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-07 00:18:07.952055 | orchestrator | + export PATH 2026-01-07 00:18:07.952067 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-07 00:18:07.952079 | orchestrator | + '[' -n '' ']' 2026-01-07 00:18:07.952092 | orchestrator | + hash -r 2026-01-07 00:18:07.952104 | orchestrator | + '[' -n '' ']' 2026-01-07 00:18:07.952115 | orchestrator | + unset VIRTUAL_ENV 2026-01-07 00:18:07.952126 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-07 00:18:07.952137 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-07 00:18:07.952149 | orchestrator | + unset -f deactivate 2026-01-07 00:18:07.952160 | orchestrator | + popd 2026-01-07 00:18:07.952171 | orchestrator | ~ 2026-01-07 00:18:07.953488 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-07 00:18:07.953517 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-07 00:18:07.954539 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-07 00:18:08.001473 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-07 00:18:08.001567 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-07 00:18:08.002268 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-07 00:18:08.048251 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:18:08.048670 | orchestrator | ++ semver 2024.2 2025.1 2026-01-07 00:18:08.102013 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:18:08.102171 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-07 00:18:08.202975 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-07 00:18:08.203209 | orchestrator | + source /opt/venv/bin/activate 2026-01-07 00:18:08.203270 | orchestrator | ++ deactivate nondestructive 2026-01-07 00:18:08.203313 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:08.203356 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:08.203376 | orchestrator | ++ hash -r 2026-01-07 00:18:08.203397 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:08.203416 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-07 00:18:08.203435 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-07 00:18:08.203452 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-07 00:18:08.203464 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-07 00:18:08.203476 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-07 00:18:08.203487 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-07 00:18:08.203498 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-07 00:18:08.203510 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-07 00:18:08.203544 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-07 00:18:08.203556 | orchestrator | ++ export PATH 2026-01-07 00:18:08.203567 | orchestrator | ++ '[' -n '' ']' 2026-01-07 00:18:08.203579 | orchestrator | ++ '[' -z '' ']' 2026-01-07 00:18:08.203589 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-07 00:18:08.203600 | orchestrator | ++ PS1='(venv) ' 2026-01-07 00:18:08.203611 | orchestrator | ++ export PS1 2026-01-07 00:18:08.203655 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-07 00:18:08.203667 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-07 00:18:08.203678 | orchestrator | ++ hash -r 2026-01-07 00:18:08.203694 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-07 00:18:09.289811 | orchestrator | 2026-01-07 00:18:09.289899 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-07 00:18:09.289906 | orchestrator | 2026-01-07 00:18:09.289912 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-07 00:18:09.848546 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:09.848738 | orchestrator | 2026-01-07 00:18:09.848771 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-01-07 00:18:10.835139 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:10.835245 | orchestrator | 2026-01-07 00:18:10.835262 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-07 00:18:10.835306 | orchestrator | 2026-01-07 00:18:10.835319 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:18:13.032002 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:13.032120 | orchestrator | 2026-01-07 00:18:13.032142 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-07 00:18:13.078961 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:13.079045 | orchestrator | 2026-01-07 00:18:13.079056 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-07 00:18:13.525604 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:13.525830 | orchestrator | 2026-01-07 00:18:13.525859 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-07 00:18:13.571312 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:13.571438 | orchestrator | 2026-01-07 00:18:13.571464 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-07 00:18:13.911078 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:13.911183 | orchestrator | 2026-01-07 00:18:13.911201 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-07 00:18:13.964047 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:13.964145 | orchestrator | 2026-01-07 00:18:13.964161 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-07 00:18:14.303157 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:14.303234 | orchestrator | 2026-01-07 00:18:14.303241 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-07 00:18:14.433486 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:14.433546 | orchestrator | 2026-01-07 00:18:14.433556 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-07 00:18:14.433564 | orchestrator | 2026-01-07 00:18:14.433571 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:18:16.147760 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:16.147870 | orchestrator | 2026-01-07 00:18:16.147888 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-07 00:18:16.250965 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-07 00:18:16.251056 | orchestrator | 2026-01-07 00:18:16.251065 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-07 00:18:16.308158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-07 00:18:16.308231 | orchestrator | 2026-01-07 00:18:16.308239 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-07 00:18:17.350666 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-07 00:18:17.350778 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-07 00:18:17.350797 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-07 00:18:17.350809 | orchestrator | 2026-01-07 00:18:17.350822 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-07 00:18:19.162745 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-07 00:18:19.162852 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-07 00:18:19.162868 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-07 00:18:19.162881 | orchestrator | 2026-01-07 00:18:19.162894 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-07 00:18:19.792223 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:18:19.792331 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:19.792346 | orchestrator | 2026-01-07 00:18:19.792358 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-07 00:18:20.418250 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:18:20.418353 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:20.418370 | orchestrator | 2026-01-07 00:18:20.418384 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-07 00:18:20.479924 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:20.480016 | orchestrator | 2026-01-07 00:18:20.480033 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-07 00:18:20.829405 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:20.829534 | orchestrator | 2026-01-07 00:18:20.829561 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-07 00:18:20.893880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-07 00:18:20.893957 | orchestrator | 2026-01-07 00:18:20.893971 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-07 00:18:21.948918 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:21.949052 | orchestrator | 2026-01-07 00:18:21.949072 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-07 
00:18:22.792472 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:22.792609 | orchestrator | 2026-01-07 00:18:22.792708 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-07 00:18:33.296621 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:33.296772 | orchestrator | 2026-01-07 00:18:33.296806 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-07 00:18:33.336620 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:33.336738 | orchestrator | 2026-01-07 00:18:33.336751 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-07 00:18:33.336762 | orchestrator | 2026-01-07 00:18:33.336771 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 00:18:35.040396 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:35.040532 | orchestrator | 2026-01-07 00:18:35.040550 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-07 00:18:35.161295 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-07 00:18:35.161421 | orchestrator | 2026-01-07 00:18:35.161436 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-07 00:18:35.223988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:18:35.224133 | orchestrator | 2026-01-07 00:18:35.224156 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-07 00:18:37.737574 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:37.737891 | orchestrator | 2026-01-07 00:18:37.737920 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-07 00:18:37.788266 | 
orchestrator | ok: [testbed-manager] 2026-01-07 00:18:37.788356 | orchestrator | 2026-01-07 00:18:37.788370 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-07 00:18:37.905696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-07 00:18:37.905786 | orchestrator | 2026-01-07 00:18:37.905794 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-07 00:18:40.699243 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-07 00:18:40.699368 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-07 00:18:40.699385 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-07 00:18:40.699397 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-07 00:18:40.699408 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-07 00:18:40.699419 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-07 00:18:40.699430 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-07 00:18:40.699442 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-07 00:18:40.699453 | orchestrator | 2026-01-07 00:18:40.699467 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-07 00:18:41.312774 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:41.312898 | orchestrator | 2026-01-07 00:18:41.312926 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-07 00:18:41.906891 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:41.906975 | orchestrator | 2026-01-07 00:18:41.906987 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-07 
00:18:41.981201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-07 00:18:41.981327 | orchestrator | 2026-01-07 00:18:41.981347 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-07 00:18:43.197268 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-07 00:18:43.197434 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-07 00:18:43.197458 | orchestrator | 2026-01-07 00:18:43.197471 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-07 00:18:43.816993 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:43.817099 | orchestrator | 2026-01-07 00:18:43.817116 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-07 00:18:43.868884 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:43.868988 | orchestrator | 2026-01-07 00:18:43.869005 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-07 00:18:43.945577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-07 00:18:43.945696 | orchestrator | 2026-01-07 00:18:43.945714 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-07 00:18:44.553286 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:44.553407 | orchestrator | 2026-01-07 00:18:44.553441 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-07 00:18:44.612182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-07 00:18:44.612281 | orchestrator | 2026-01-07 00:18:44.612296 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-07 00:18:45.926510 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:18:45.926614 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-07 00:18:45.926683 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:45.926698 | orchestrator | 2026-01-07 00:18:45.926709 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-07 00:18:46.553959 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:46.554602 | orchestrator | 2026-01-07 00:18:46.554675 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-07 00:18:46.613793 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:46.613937 | orchestrator | 2026-01-07 00:18:46.614000 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-07 00:18:46.706349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-07 00:18:46.706472 | orchestrator | 2026-01-07 00:18:46.706485 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-07 00:18:47.228733 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:47.228808 | orchestrator | 2026-01-07 00:18:47.228816 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-07 00:18:47.641715 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:47.641835 | orchestrator | 2026-01-07 00:18:47.641854 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-07 00:18:48.856809 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-07 00:18:48.856914 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2026-01-07 00:18:48.856931 | orchestrator | 2026-01-07 00:18:48.856964 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-07 00:18:49.471627 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:49.471822 | orchestrator | 2026-01-07 00:18:49.471840 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-07 00:18:49.848753 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:49.848852 | orchestrator | 2026-01-07 00:18:49.848865 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-07 00:18:50.200917 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:50.201022 | orchestrator | 2026-01-07 00:18:50.201039 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-07 00:18:50.244358 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:50.244456 | orchestrator | 2026-01-07 00:18:50.244470 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-07 00:18:50.311183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-07 00:18:50.311290 | orchestrator | 2026-01-07 00:18:50.311329 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-07 00:18:50.360170 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:50.360247 | orchestrator | 2026-01-07 00:18:50.360255 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-07 00:18:52.262751 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-07 00:18:52.262880 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-07 00:18:52.262898 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-07 00:18:52.262909 | orchestrator | 2026-01-07 00:18:52.262923 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-07 00:18:52.977862 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:52.977981 | orchestrator | 2026-01-07 00:18:52.978000 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-07 00:18:53.687207 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:53.687314 | orchestrator | 2026-01-07 00:18:53.687331 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-07 00:18:54.374759 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:54.374867 | orchestrator | 2026-01-07 00:18:54.374888 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-07 00:18:54.449108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-07 00:18:54.449213 | orchestrator | 2026-01-07 00:18:54.449230 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-07 00:18:54.492965 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:54.493058 | orchestrator | 2026-01-07 00:18:54.493074 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-07 00:18:55.198358 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-07 00:18:55.198442 | orchestrator | 2026-01-07 00:18:55.198453 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-07 00:18:55.278275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-07 00:18:55.278374 | orchestrator | 2026-01-07 00:18:55.278392 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-07 00:18:55.947352 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:55.947466 | orchestrator | 2026-01-07 00:18:55.947485 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-07 00:18:56.547701 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:56.547810 | orchestrator | 2026-01-07 00:18:56.547837 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-07 00:18:56.603319 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:18:56.603415 | orchestrator | 2026-01-07 00:18:56.603431 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-07 00:18:56.656352 | orchestrator | ok: [testbed-manager] 2026-01-07 00:18:56.656436 | orchestrator | 2026-01-07 00:18:56.656448 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-07 00:18:57.454337 | orchestrator | changed: [testbed-manager] 2026-01-07 00:18:57.455325 | orchestrator | 2026-01-07 00:18:57.455363 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-07 00:20:03.879719 | orchestrator | changed: [testbed-manager] 2026-01-07 00:20:03.879832 | orchestrator | 2026-01-07 00:20:03.879846 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-07 00:20:04.895812 | orchestrator | ok: [testbed-manager] 2026-01-07 00:20:04.895931 | orchestrator | 2026-01-07 00:20:04.895960 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-07 00:20:04.955008 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:20:04.955097 | orchestrator | 2026-01-07 00:20:04.955114 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-07 00:20:08.022518 | orchestrator | changed: [testbed-manager] 2026-01-07 00:20:08.022605 | orchestrator | 2026-01-07 00:20:08.022620 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-07 00:20:08.119121 | orchestrator | ok: [testbed-manager] 2026-01-07 00:20:08.119212 | orchestrator | 2026-01-07 00:20:08.119233 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-07 00:20:08.119252 | orchestrator | 2026-01-07 00:20:08.119268 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-07 00:20:08.168119 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:20:08.168185 | orchestrator | 2026-01-07 00:20:08.168195 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-07 00:21:08.218416 | orchestrator | Pausing for 60 seconds 2026-01-07 00:21:08.218538 | orchestrator | changed: [testbed-manager] 2026-01-07 00:21:08.218558 | orchestrator | 2026-01-07 00:21:08.218572 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-07 00:21:11.302932 | orchestrator | changed: [testbed-manager] 2026-01-07 00:21:11.303040 | orchestrator | 2026-01-07 00:21:11.303058 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-07 00:21:52.766118 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-07 00:21:52.766290 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-01-07 00:21:52.766320 | orchestrator | changed: [testbed-manager] 2026-01-07 00:21:52.766341 | orchestrator | 2026-01-07 00:21:52.766359 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-07 00:22:02.863110 | orchestrator | changed: [testbed-manager] 2026-01-07 00:22:02.863225 | orchestrator | 2026-01-07 00:22:02.863244 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-07 00:22:02.954486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-07 00:22:02.954581 | orchestrator | 2026-01-07 00:22:02.954596 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-07 00:22:02.954608 | orchestrator | 2026-01-07 00:22:02.954618 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-07 00:22:03.003376 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:22:03.003466 | orchestrator | 2026-01-07 00:22:03.003481 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-07 00:22:03.073536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-07 00:22:03.073622 | orchestrator | 2026-01-07 00:22:03.073636 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-07 00:22:03.871869 | orchestrator | changed: [testbed-manager] 2026-01-07 00:22:03.871998 | orchestrator | 2026-01-07 00:22:03.872027 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-07 00:22:06.735507 | orchestrator | ok: [testbed-manager] 2026-01-07 00:22:06.735610 | orchestrator | 2026-01-07 00:22:06.735629 | orchestrator | TASK 
[osism.services.manager : Display version check results] ******************
2026-01-07 00:22:06.793740 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:22:06.793838 | orchestrator | "version_check_result.stdout_lines": [
2026-01-07 00:22:06.793854 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-07 00:22:06.793866 | orchestrator | "Checking running containers against expected versions...",
2026-01-07 00:22:06.793879 | orchestrator | "",
2026-01-07 00:22:06.793890 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-07 00:22:06.793902 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-07 00:22:06.793913 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.793925 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-07 00:22:06.793935 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.793947 | orchestrator | "",
2026-01-07 00:22:06.793957 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-07 00:22:06.793998 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-07 00:22:06.794010 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794075 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-07 00:22:06.794087 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794098 | orchestrator | "",
2026-01-07 00:22:06.794109 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-07 00:22:06.794120 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-07 00:22:06.794130 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794141 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-07 00:22:06.794152 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794163 | orchestrator | "",
2026-01-07 00:22:06.794173 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-07 00:22:06.794185 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-07 00:22:06.794196 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794206 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-07 00:22:06.794218 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794230 | orchestrator | "",
2026-01-07 00:22:06.794246 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-07 00:22:06.794259 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-07 00:22:06.794273 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794285 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-07 00:22:06.794297 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794310 | orchestrator | "",
2026-01-07 00:22:06.794323 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-07 00:22:06.794336 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794349 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794361 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794374 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794386 | orchestrator | "",
2026-01-07 00:22:06.794398 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-07 00:22:06.794412 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:22:06.794425 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794437 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-07 00:22:06.794450 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794464 | orchestrator | "",
2026-01-07 00:22:06.794475 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-07 00:22:06.794486 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:22:06.794497 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794508 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-07 00:22:06.794519 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794530 | orchestrator | "",
2026-01-07 00:22:06.794540 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-07 00:22:06.794551 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-01-07 00:22:06.794562 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794573 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-01-07 00:22:06.794584 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794595 | orchestrator | "",
2026-01-07 00:22:06.794606 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-07 00:22:06.794617 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:22:06.794628 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794639 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-07 00:22:06.794676 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794689 | orchestrator | "",
2026-01-07 00:22:06.794699 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-07 00:22:06.794720 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794730 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794741 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794752 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794762 | orchestrator | "",
2026-01-07 00:22:06.794773 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-07 00:22:06.794784 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794796 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794816 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794835 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794853 | orchestrator | "",
2026-01-07 00:22:06.794870 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-07 00:22:06.794888 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794907 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.794924 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.794943 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.794962 | orchestrator | "",
2026-01-07 00:22:06.794979 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-07 00:22:06.794997 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.795017 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.795035 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.795070 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.795082 | orchestrator | "",
2026-01-07 00:22:06.795093 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-07 00:22:06.795104 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.795125 | orchestrator | " Enabled: true",
2026-01-07 00:22:06.795136 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-07 00:22:06.795147 | orchestrator | " Status: ✅ MATCH",
2026-01-07 00:22:06.795158 | orchestrator | "",
2026-01-07 00:22:06.795168 | orchestrator | "=== Summary ===",
2026-01-07 00:22:06.795179 | orchestrator | "Errors (version mismatches): 0",
2026-01-07 00:22:06.795190 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-07 00:22:06.795201 | orchestrator | "",
2026-01-07 00:22:06.795212 | orchestrator | "✅ All running containers match expected versions!"
2026-01-07 00:22:06.795223 | orchestrator | ]
2026-01-07 00:22:06.795234 | orchestrator | }
2026-01-07 00:22:06.795246 | orchestrator |
2026-01-07 00:22:06.795256 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-07 00:22:06.849302 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:06.849404 | orchestrator |
2026-01-07 00:22:06.849421 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:22:06.849435 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-07 00:22:06.849446 | orchestrator |
2026-01-07 00:22:06.941214 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-07 00:22:06.941313 | orchestrator | + deactivate
2026-01-07 00:22:06.941327 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-07 00:22:06.941341 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-07 00:22:06.941352 | orchestrator | + export PATH
2026-01-07 00:22:06.941364 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-07 00:22:06.941376 | orchestrator | + '[' -n '' ']'
2026-01-07 00:22:06.941387 | orchestrator | + hash -r
2026-01-07 00:22:06.941398 | orchestrator | + '[' -n '' ']'
2026-01-07 00:22:06.941409 | orchestrator | + unset VIRTUAL_ENV
2026-01-07 00:22:06.941419 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-07 00:22:06.941430 | orchestrator | + '[' '!'
'' = nondestructive ']'
2026-01-07 00:22:06.941442 | orchestrator | + unset -f deactivate
2026-01-07 00:22:06.941465 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-01-07 00:22:06.949698 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-07 00:22:06.949811 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-07 00:22:06.949827 | orchestrator | + local max_attempts=60
2026-01-07 00:22:06.949841 | orchestrator | + local name=ceph-ansible
2026-01-07 00:22:06.949852 | orchestrator | + local attempt_num=1
2026-01-07 00:22:06.951508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:22:06.980098 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:22:06.980174 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-07 00:22:06.980189 | orchestrator | + local max_attempts=60
2026-01-07 00:22:06.980201 | orchestrator | + local name=kolla-ansible
2026-01-07 00:22:06.980213 | orchestrator | + local attempt_num=1
2026-01-07 00:22:06.980780 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-07 00:22:07.016177 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:22:07.016253 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-07 00:22:07.016266 | orchestrator | + local max_attempts=60
2026-01-07 00:22:07.016278 | orchestrator | + local name=osism-ansible
2026-01-07 00:22:07.016289 | orchestrator | + local attempt_num=1
2026-01-07 00:22:07.017065 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-07 00:22:07.059408 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:22:07.059494 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-07 00:22:07.059509 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-07 00:22:07.792201 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-01-07 00:22:07.974532 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-01-07 00:22:07.974716 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974739 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974752 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-01-07 00:22:07.974765 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-01-07 00:22:07.974799 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974811 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974822 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy)
2026-01-07 00:22:07.974833 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974844 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-01-07 00:22:07.974854 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974865 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-01-07 00:22:07.974899 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974912 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-01-07 00:22:07.974923 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.974934 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-01-07 00:22:07.980056 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-07 00:22:08.036021 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-07 00:22:08.036113 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-01-07 00:22:08.039797 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-01-07 00:22:20.285939 | orchestrator | 2026-01-07 00:22:20 | INFO  | Task a40155ac-bbaa-4b39-a1f5-720d82a5ce11 (resolvconf) was prepared for execution.
2026-01-07 00:22:20.286133 | orchestrator | 2026-01-07 00:22:20 | INFO  | It takes a moment until task a40155ac-bbaa-4b39-a1f5-720d82a5ce11 (resolvconf) has been started and output is visible here.
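The trace above shows `wait_for_container_healthy 60 <name>` probing each manager container with `/usr/bin/docker inspect -f '{{.State.Health.Status}}'`. A minimal sketch of such a helper, reconstructed from the variables visible in the trace (`max_attempts`, `name`, `attempt_num`); the retry interval, the overridable `DOCKER` variable, and the error message are assumptions, not taken from the actual deploy script:

```shell
#!/bin/sh
# Poll a container's health status until it reports "healthy" or attempts run out.
# DOCKER is overridable so the loop can be exercised without a Docker daemon;
# the real script invokes /usr/bin/docker directly.
DOCKER="${DOCKER:-docker}"

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while [ "$attempt_num" -le "$max_attempts" ]; do
        # Same probe as in the trace: read .State.Health.Status via a Go template.
        status="$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")"
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed interval; not visible in the trace
    done
    echo "container $name did not become healthy" >&2
    return 1
}
```

With `max_attempts=60` and a short sleep this gives the deploy a bounded window for each of ceph-ansible, kolla-ansible, and osism-ansible to pass their Docker health checks before the run proceeds.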
2026-01-07 00:22:34.828065 | orchestrator |
2026-01-07 00:22:34.828182 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-07 00:22:34.828221 | orchestrator |
2026-01-07 00:22:34.828234 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:22:34.828246 | orchestrator | Wednesday 07 January 2026 00:22:24 +0000 (0:00:00.136) 0:00:00.136 *****
2026-01-07 00:22:34.828258 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:34.828271 | orchestrator |
2026-01-07 00:22:34.828283 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-07 00:22:34.828295 | orchestrator | Wednesday 07 January 2026 00:22:28 +0000 (0:00:04.621) 0:00:04.758 *****
2026-01-07 00:22:34.828306 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:34.828318 | orchestrator |
2026-01-07 00:22:34.828329 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-07 00:22:34.828340 | orchestrator | Wednesday 07 January 2026 00:22:28 +0000 (0:00:00.064) 0:00:04.822 *****
2026-01-07 00:22:34.828352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-07 00:22:34.828364 | orchestrator |
2026-01-07 00:22:34.828380 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-07 00:22:34.828396 | orchestrator | Wednesday 07 January 2026 00:22:29 +0000 (0:00:00.083) 0:00:04.905 *****
2026-01-07 00:22:34.828429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-07 00:22:34.828440 | orchestrator |
2026-01-07 00:22:34.828451 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-07 00:22:34.828462 | orchestrator | Wednesday 07 January 2026 00:22:29 +0000 (0:00:00.080) 0:00:04.986 *****
2026-01-07 00:22:34.828480 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:34.828493 | orchestrator |
2026-01-07 00:22:34.828504 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-07 00:22:34.828515 | orchestrator | Wednesday 07 January 2026 00:22:30 +0000 (0:00:01.048) 0:00:06.035 *****
2026-01-07 00:22:34.828526 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:34.828537 | orchestrator |
2026-01-07 00:22:34.828547 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-07 00:22:34.828581 | orchestrator | Wednesday 07 January 2026 00:22:30 +0000 (0:00:00.065) 0:00:06.100 *****
2026-01-07 00:22:34.828593 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:34.828606 | orchestrator |
2026-01-07 00:22:34.828619 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-07 00:22:34.828631 | orchestrator | Wednesday 07 January 2026 00:22:30 +0000 (0:00:00.534) 0:00:06.634 *****
2026-01-07 00:22:34.828643 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:34.828679 | orchestrator |
2026-01-07 00:22:34.828692 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-07 00:22:34.828705 | orchestrator | Wednesday 07 January 2026 00:22:30 +0000 (0:00:00.075) 0:00:06.709 *****
2026-01-07 00:22:34.828718 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:34.828730 | orchestrator |
2026-01-07 00:22:34.828743 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-07 00:22:34.828755 | orchestrator | Wednesday 07 January 2026 00:22:31 +0000 (0:00:00.533) 0:00:07.243 *****
2026-01-07 00:22:34.828768 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:34.828781 | orchestrator |
2026-01-07 00:22:34.828794 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-07 00:22:34.828807 | orchestrator | Wednesday 07 January 2026 00:22:32 +0000 (0:00:01.079) 0:00:08.322 *****
2026-01-07 00:22:34.828820 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:34.828832 | orchestrator |
2026-01-07 00:22:34.828844 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-07 00:22:34.828857 | orchestrator | Wednesday 07 January 2026 00:22:33 +0000 (0:00:00.979) 0:00:09.302 *****
2026-01-07 00:22:34.828870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-07 00:22:34.828883 | orchestrator |
2026-01-07 00:22:34.828895 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-07 00:22:34.828908 | orchestrator | Wednesday 07 January 2026 00:22:33 +0000 (0:00:00.062) 0:00:09.365 *****
2026-01-07 00:22:34.828919 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:34.828932 | orchestrator |
2026-01-07 00:22:34.828944 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:22:34.828958 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-07 00:22:34.828971 | orchestrator |
2026-01-07 00:22:34.828983 | orchestrator |
2026-01-07 00:22:34.828993 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:22:34.829004 | orchestrator | Wednesday 07 January 2026 00:22:34 +0000 (0:00:01.115) 0:00:10.480 *****
2026-01-07 00:22:34.829014 | orchestrator | ===============================================================================
2026-01-07 00:22:34.829025 | orchestrator | Gathering Facts --------------------------------------------------------- 4.62s
2026-01-07 00:22:34.829035 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s
2026-01-07 00:22:34.829046 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2026-01-07 00:22:34.829056 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2026-01-07 00:22:34.829067 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s
2026-01-07 00:22:34.829078 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s
2026-01-07 00:22:34.829106 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2026-01-07 00:22:34.829118 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-01-07 00:22:34.829129 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-01-07 00:22:34.829139 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-01-07 00:22:34.829160 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-01-07 00:22:34.829171 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-01-07 00:22:34.829181 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s
2026-01-07 00:22:35.079945 | orchestrator | + osism apply sshconfig
2026-01-07 00:22:47.110956 | orchestrator | 2026-01-07 00:22:47 | INFO  | Task 7f9a9404-3160-4575-be57-c834222e3a26 (sshconfig) was prepared for execution.
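The `osism apply sshconfig` run started above writes one SSH config snippet per testbed host into a config.d directory and then assembles them into a single config file. A minimal sketch of that write-then-assemble pattern; the paths, host list, user name, and options here are illustrative, not the role's actual defaults:

```shell
#!/bin/sh
# Write one snippet per host under config.d, then concatenate them into one
# config file, mirroring the per-host "Ensure config for each host exist" and
# final "Assemble ssh config" steps of the sshconfig role. All values assumed.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"

for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$workdir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking accept-new
EOF
done

# Assemble all per-host snippets into a single config.
cat "$workdir"/config.d/* > "$workdir/config"
```

Keeping one snippet per host makes the per-host task idempotent and lets the final assemble step rebuild the complete config whenever any snippet changes.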
2026-01-07 00:22:47.111061 | orchestrator | 2026-01-07 00:22:47 | INFO  | It takes a moment until task 7f9a9404-3160-4575-be57-c834222e3a26 (sshconfig) has been started and output is visible here.
2026-01-07 00:22:58.406801 | orchestrator |
2026-01-07 00:22:58.406914 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-07 00:22:58.406931 | orchestrator |
2026-01-07 00:22:58.406966 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-07 00:22:58.406979 | orchestrator | Wednesday 07 January 2026 00:22:51 +0000 (0:00:00.153) 0:00:00.153 *****
2026-01-07 00:22:58.406991 | orchestrator | ok: [testbed-manager]
2026-01-07 00:22:58.407003 | orchestrator |
2026-01-07 00:22:58.407015 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-07 00:22:58.407026 | orchestrator | Wednesday 07 January 2026 00:22:51 +0000 (0:00:00.537) 0:00:00.691 *****
2026-01-07 00:22:58.407037 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:58.407049 | orchestrator |
2026-01-07 00:22:58.407060 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-07 00:22:58.407071 | orchestrator | Wednesday 07 January 2026 00:22:52 +0000 (0:00:00.525) 0:00:01.216 *****
2026-01-07 00:22:58.407082 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-07 00:22:58.407093 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-07 00:22:58.407104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-07 00:22:58.407114 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-07 00:22:58.407125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-07 00:22:58.407136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-07 00:22:58.407146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-07 00:22:58.407157 | orchestrator |
2026-01-07 00:22:58.407168 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-07 00:22:58.407178 | orchestrator | Wednesday 07 January 2026 00:22:57 +0000 (0:00:05.401) 0:00:06.618 *****
2026-01-07 00:22:58.407189 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:22:58.407200 | orchestrator |
2026-01-07 00:22:58.407210 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-07 00:22:58.407221 | orchestrator | Wednesday 07 January 2026 00:22:57 +0000 (0:00:00.076) 0:00:06.695 *****
2026-01-07 00:22:58.407232 | orchestrator | changed: [testbed-manager]
2026-01-07 00:22:58.407243 | orchestrator |
2026-01-07 00:22:58.407254 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:22:58.407266 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:22:58.407278 | orchestrator |
2026-01-07 00:22:58.407288 | orchestrator |
2026-01-07 00:22:58.407299 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:22:58.407310 | orchestrator | Wednesday 07 January 2026 00:22:58 +0000 (0:00:00.555) 0:00:07.250 *****
2026-01-07 00:22:58.407321 | orchestrator | ===============================================================================
2026-01-07 00:22:58.407332 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.40s
2026-01-07 00:22:58.407343 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2026-01-07 00:22:58.407353 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2026-01-07 00:22:58.407429 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2026-01-07 00:22:58.407441 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-01-07 00:22:58.675890 | orchestrator | + osism apply known-hosts
2026-01-07 00:23:10.601639 | orchestrator | 2026-01-07 00:23:10 | INFO  | Task 03c4e4e9-f32b-48fa-89a5-8880670334f5 (known-hosts) was prepared for execution.
2026-01-07 00:23:10.601765 | orchestrator | 2026-01-07 00:23:10 | INFO  | It takes a moment until task 03c4e4e9-f32b-48fa-89a5-8880670334f5 (known-hosts) has been started and output is visible here.
2026-01-07 00:23:27.058358 | orchestrator |
2026-01-07 00:23:27.058466 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-07 00:23:27.058478 | orchestrator |
2026-01-07 00:23:27.058485 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-07 00:23:27.058493 | orchestrator | Wednesday 07 January 2026 00:23:14 +0000 (0:00:00.153) 0:00:00.153 *****
2026-01-07 00:23:27.058501 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-07 00:23:27.058508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-07 00:23:27.058515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-07 00:23:27.058521 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-07 00:23:27.058527 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-07 00:23:27.058534 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-07 00:23:27.058540 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-07 00:23:27.058547 | orchestrator |
2026-01-07 00:23:27.058553 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-07 00:23:27.058560 | orchestrator | Wednesday 07 January 2026 00:23:20 +0000 (0:00:05.974) 0:00:06.127 *****
2026-01-07 00:23:27.058568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-07 00:23:27.058576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-07 00:23:27.058583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-07 00:23:27.058589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-07 00:23:27.058595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-07 00:23:27.058609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-07 00:23:27.058616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-07 00:23:27.058622 | orchestrator |
2026-01-07 00:23:27.058628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-07 00:23:27.058634 | orchestrator | Wednesday 07 January 2026 00:23:20 +0000 (0:00:00.161) 0:00:06.289 *****
2026-01-07 00:23:27.058641 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519
AAAAC3NzaC1lZDI1NTE5AAAAIPJKL/GGZPmUZzso66DzsjCQZc2HVn2k4FZaxw9Vj9fX) 2026-01-07 00:23:27.058699 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjT9OV4lZLrIYZIEj40A5lMFkC4Wd3FJaJye53S9eCx6H8xgUdbUOLKzi1/MFXHuKLJ7yAW11y6YQ64YMPHC/tiwpWeFwPlKFVi+e1ph+bF4p4YCGaZzpWp5xMMXVvnp6RgTjsocfxMId7PoVvZYK7F6vlMWcm/8CxZhnEo+s6Q+EceSn8ssqrul0yDPE8FinqMiaF8IPKQJ6dmOEbBI2ygNnnI4kh0P/UfLkXZhJzhL5qDv1sTLjI44i3X9yAweRC9Q/QSvabuobq7QiM/xikHXJqKhQdMxMtX+MxZv6A14dP9jK+s7cnjtCeli4z4tfwPaQT6IhUqP/jVgaMvKsNENA5nbZsB4tgVXQ9Qm0YUshEJhKsmxvBrR8VmhnpoL0rn+4lSisvUjBAOhHnrFoct8A+3WeVJ1H6Z8bW1R20PUVgslQxwA9wY6NVJDCjZ9I8GBYVae2GQHE3B8rdAWDYu4QxhFv4A1JCNYW1I7U8H2aKt5D+S061MYkupkUE9Is=) 2026-01-07 00:23:27.058727 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJcZmxslssTh4eAB2SGRFNrjwGtQiq38td0oNQTGly3ey4d8Zv4lVFmXSbagyiyubF3/rOFFGfZ1KRPuaS1DLlM=) 2026-01-07 00:23:27.058736 | orchestrator | 2026-01-07 00:23:27.058742 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:27.058748 | orchestrator | Wednesday 07 January 2026 00:23:21 +0000 (0:00:01.146) 0:00:07.436 ***** 2026-01-07 00:23:27.058754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAq6dRQL5qZGGaG7dzg5agJ0CXtla9nOdEpna+gzl7dBPRNS8a1Dkh7sZzpxiWAsxlj8KTzaEF+CjBT9j2gjxaE=) 2026-01-07 00:23:27.058761 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+1HGa+AmPqyTZSGdUSHreBhvm+18UL7MYyVSiM4X9x) 2026-01-07 00:23:27.058787 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFuiwUtWGSwR5JMwGQh5X11HpKDWYiuFTWpUg/hH3mCsnpF/YmrqhNdWyRo3sXE8uVzsJKJ4GIVcgDeZhAUq83Z52T5Oq9wqBpTDUP12f9WcZXkpu7/KGFhsglHiawkSyg5kJQw5aMENeFUxj+l/u2aoZbwiRWQL7T0gkBQrAn8ZiKVsGd1/4k4LRuxFgOMILqFZ0WnTsrszcpp62k1mJU8o5wmZFlH7ouaOYNivTzLPhHrovVkuiEk6658bn8WG3YME07/ticNDNzmsdAkVS/ToRgqbndml9AmOUB4fn8EwNgU4a26NCU/KRPftMPSrJZfWG/KGlZjGrkvR8uN5HGzwqF9TMTPJBNtWOxahxxOMsuM06IdPfwJfRkDnPcNtSaD0ySYi2rbc2Ra7QifbFRreSf8Kk2bibXI9QZvzgr1I4ZRS3/X+dAvCzRSuFRzW3r00XzSVmZiCr+ebtuTR83s0xK1iuYORkNsAimrItaoFKybYENWX0OCkqmn/A5rD8=) 2026-01-07 00:23:27.058794 | orchestrator | 2026-01-07 00:23:27.058801 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:27.058807 | orchestrator | Wednesday 07 January 2026 00:23:22 +0000 (0:00:01.064) 0:00:08.500 ***** 2026-01-07 00:23:27.058814 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIUOIImMKwoO4dQWYuu8o+7FCvI3Yp07GdHmvic2qnze2hS0XsEtwqr0IJco4d5tnfVwnMBZh6t2Ryt/6i2f5i4=) 2026-01-07 00:23:27.058820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq2T0P3Rfva59U6yYxuo7jYGFehgJ/PwwzsXnorLyqyGU5+dRUzryJk7/p0ohYNbe5HtPxlLQWdC5WnDZ4XSxYnh/cr66LRn3LnrCWYfEi0qWMxrhLp9zZyiZi4xoeUv9vqOn6nakcbqH7klCb/TsMUIjrTrnX/4hBgx3/mPCVxvgWVr+t0qoOH1ALydxXFecpIaWu6Fx41xscU5xnXET4nVnLLQbbt1QZXFRSpfxh+9fu00WB8J9fhcYTHa38w6uf1ImbYnVFj3uoHcV3QW8t/0u4M3GeicCQPy8Xa99rD0KzDmAJQ0rilqtoOGuIFt5+ymC6gKA+iE4q1znSEUX+zUJ3YyMyxouJEwHGilOojxVWxxQ60uUDnhOwPuIUmVmdHjqXgdHLYmmxUrXFcfzuucJNC7UvoK3UH3yZAmNV/Ki2/GCbXn2NSUTMLN/OrAu4Yteyrfj5QjXESHx9shcrJFUX2xWRWrWvRCf4scYgaRFRWulU8V0rN8zg3MK1HU0=) 2026-01-07 00:23:27.058827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINhS1b3vjFENegpnVlgtbr/2+fCji7qSvuoHK+qMTPTx) 2026-01-07 00:23:27.058833 | orchestrator | 2026-01-07 00:23:27.058839 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:27.058846 | orchestrator | Wednesday 07 January 2026 00:23:23 +0000 (0:00:01.034) 0:00:09.535 ***** 2026-01-07 00:23:27.058852 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkytWrrdKAgrJanW+dRrLiz7cinkgqu4zr07UKSh1KFL9mP3Zixnme33WZkDTTSHQBBUlz82WkouIoOrMTkad99OT8w7rBs9s0fZ6WgVFyyxqw5GxrYnVbKIpuOkS3gKp7Tk4qoTbyCfh4pynGNPLKVCvRHaa+S2JQcgH0+1Rt53btKqLLiv66+MKptg1B5SlxG1SSXQPtI+0P4uMPSltm4CsDv07et0wfgflVKkW/XwPr+RuQ5dLHvTKeRziXAZX/tkUEViTAbMrV91KqOMVc22dUG8P6NDr/Iq8XZMlGfLvR1qu7GDUfwkEGC/hb40StFR8l+ACYQ8O5JFt+25rum2PK0sN3eH2SxVtUq55IotnQWsW8aQUkzoJMq0H59kz2jzaDcrsh7Th3cTHsfPm2otyH8Dtg9gIbsWOxOPhAU7IkorZuH/im82dKx21fxrhrYADrSFPLz38WGWUIne3Jgbx4/as/giZhDX/pDLHypE9mglSS4dx9e3IK/fb/KFE=) 2026-01-07 00:23:27.058864 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAUt4B+tBhiVaCx10t7iMRu50p7cPQR4l2tltMF9ri18l9MAlXSTUDuBZX4Ycq8qTINw1KffQygLTndcMfnRWYw=) 2026-01-07 00:23:27.058881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDuYxvjdTnxe75u4LCiaaOjAYtwYyuO1Oo61oOhcDIBQ) 2026-01-07 00:23:27.058895 | orchestrator | 2026-01-07 00:23:27.058903 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:27.058911 | orchestrator | Wednesday 07 January 2026 00:23:25 +0000 (0:00:01.035) 0:00:10.571 ***** 2026-01-07 00:23:27.058975 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMf3xYUx7iM0kxGfHsdxRRcmnBoU1XAHOawERgU6ZpZ/YhdGv9wkiauSJ8I4KGWl2YXHMIABvu2e6ve1YdJnBFg=) 2026-01-07 00:23:27.058984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDGmVcBv8mE5Ix8R+tWLPGdghNGXQAHhfeT+FhOQo4iP) 2026-01-07 00:23:27.058992 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0ZqTu9TDtwrEGtxntAvTn8vTO1JdYN1oMg5pmvOgdgWkja+uMFJGrLBxIr0mpOoo5J6cFPpmEt8ef93Cll8cxhCC9fHkux+wCKhWUL/3sKlfLVzvocnUHkajpXaz0/GuuL6cIwgyEoR4BmMVlhnim9E1t44XRSnTG/0DPzlxLPQjKyLYPUn0uCFwS4/QVQMgBsZLq6Q6EMLgIcPhIjv2rplYT7+gTEnM9xoyIC+OgzS7b8oH9in+J2CngYlXPadxQ3Nz6I/bicFjgYYC8TrlvUeoeR9uQQd42cXRzO4bYPrIH46tTrAyxYHYx09Rd23OF2uE+XISJJ2seH6TnLEIc4p5dHA6P22K+dTwWA6MxkpZW9zZLGoAnDxXWF7aKpCFlXJTZstRoIV9wCYn8CcrvnhGunZaqU9gxSGIRMuA22iuAGf2XcAQbOUPd46fFOUiaIslgSxP6tMVdqHL+AIwYYXSfUO0Aol1Uived1vy5SF4AobbkbTwCgwsHOODKpsM=) 2026-01-07 00:23:27.058999 | orchestrator | 2026-01-07 00:23:27.059008 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:27.059015 | orchestrator | Wednesday 07 January 2026 00:23:26 +0000 (0:00:01.035) 0:00:11.606 ***** 2026-01-07 00:23:27.059027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuwwk8dPS0t9AwotfnlvX+b2aCZJikYRIbtBJJ4rDXJ+df3+bzwQbaoV7lIiBL25OMOtlwKEmTjNnMGQNIAugw=) 2026-01-07 00:23:37.635848 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQlD6VVrYehlb8DwJMcwK73NfqnrCnyYDLj2ufoTrS4) 2026-01-07 00:23:37.635961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIgTf5F5r6CNKT/TF6cz6e5wm7svm9zxi7ampbj/WDlMrvrmAY+llPLluBDwykHe3QJaaYZ/mYYF7qpDGgymbneRwTeZIDQvIFn/hQmGJ3qDIpLYdj8mKLb3HInR98PETCTDVFnNEyUaWHxe/nnOCmw0Id8+PEIHNn3QcjRrymTyqbl1rwoLA0ncjEOfozI1n9RhHbioxAJgBDIuJkHBzQ2T2K/qhumNvufG/JWNBQB+pU5HJDaiHRASEoVAnqOHxRHdw9uFBeQy0X66ji/unkLlmGtOeHhRiPCMRhv2IgDr/oJMkWqId7dv1eEa70G7i9oL+Cmc+uNED1l+QLQoSy4yesVAFEyxBoUTCG5+LghtuxdHoAVHGrN6j6h6XHeo+l5jqzz9NVyNA5H1+RUKNgZTtUcK88psHZVC5vxJ6oBs74bAv3CGFmnyu4S+enNGdd4UxF6AyZ1C0ypCX1ChhtFllaFv6DkkpxyV7xoXh047yrk7+6q9izdqg0MFrZDG8=) 2026-01-07 00:23:37.635972 | orchestrator | 2026-01-07 00:23:37.635977 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.635982 | orchestrator | Wednesday 07 January 2026 00:23:27 +0000 (0:00:01.009) 0:00:12.616 ***** 2026-01-07 00:23:37.635987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2unr15xrIf6zz9rWvqhMtOuSGiCvNx+t2ZdVicpbq5/kSMPa3GH5rte2tyXypGsPgLgPyd6N4hxVEfZOuD0rc=) 2026-01-07 00:23:37.635994 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDggXUNOeA68kyHaLGJwT7XIAz3Yj0mxJ8MJiOklybM+/m7T9TWSB1O9Ys0NuYHbyr2O4vnfoSUTBoePAPc7h/OdKGPZzED5TzF3Kkx/T6Bhjbdnh1IydJ5Vyw2paVNkVmzU4WFPTaZfdP6Gc/gwZ4OCrgJE4UEQSLgBzb6WnODTJn5+Byuo13JgH5cE3ND1BuB7Yh9y4ky58L60Zt/pLj/F1Ht3n0OqfGkQuyNsyMTCluuZnLBiwfNYP5bYbbxa1LhC25/D5kWIKSg/2oV/XsDfCQ45mbbjhSzWdPlAINAYT6NxUXqfMEzsMAofWQhtbdTsWOaN6kCa/NhtVdnQ0/mvd5eDeQRaGm2QnmnzpioF/Azcrs0MCF16cihpBxJXYlWjAhBYMfIpcjlYkdEzhwGIVCckXwWtFWMfEZ/mz6JPiSrVZ/O4aA7KCQeVsRbTv7pSuH2nTVralU7kExicKJ8EF50y4FfJVpXueBQX3AyMHYwvcw65DU1n5VovNpM5z8=) 2026-01-07 00:23:37.636018 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL16OZq64tlUqJzew/iI66aEbC6cV08jKlFxQ9zgwDYn) 2026-01-07 00:23:37.636022 | orchestrator | 2026-01-07 00:23:37.636026 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-07 00:23:37.636031 | orchestrator | Wednesday 07 January 2026 00:23:28 +0000 (0:00:00.980) 0:00:13.596 ***** 2026-01-07 00:23:37.636036 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-07 00:23:37.636040 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-07 00:23:37.636044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-07 00:23:37.636047 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-07 00:23:37.636051 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-07 00:23:37.636055 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-07 00:23:37.636058 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-07 00:23:37.636062 | orchestrator | 2026-01-07 00:23:37.636066 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-07 00:23:37.636071 | orchestrator | Wednesday 07 January 2026 00:23:33 +0000 (0:00:05.180) 0:00:18.776 ***** 2026-01-07 00:23:37.636076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-07 00:23:37.636083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-07 00:23:37.636087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-07 00:23:37.636091 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-07 00:23:37.636094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-07 00:23:37.636098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-07 00:23:37.636102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-07 00:23:37.636105 | orchestrator | 2026-01-07 00:23:37.636120 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.636124 | orchestrator | Wednesday 07 January 2026 00:23:33 +0000 (0:00:00.195) 0:00:18.971 ***** 2026-01-07 00:23:37.636128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJKL/GGZPmUZzso66DzsjCQZc2HVn2k4FZaxw9Vj9fX) 2026-01-07 00:23:37.636149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjT9OV4lZLrIYZIEj40A5lMFkC4Wd3FJaJye53S9eCx6H8xgUdbUOLKzi1/MFXHuKLJ7yAW11y6YQ64YMPHC/tiwpWeFwPlKFVi+e1ph+bF4p4YCGaZzpWp5xMMXVvnp6RgTjsocfxMId7PoVvZYK7F6vlMWcm/8CxZhnEo+s6Q+EceSn8ssqrul0yDPE8FinqMiaF8IPKQJ6dmOEbBI2ygNnnI4kh0P/UfLkXZhJzhL5qDv1sTLjI44i3X9yAweRC9Q/QSvabuobq7QiM/xikHXJqKhQdMxMtX+MxZv6A14dP9jK+s7cnjtCeli4z4tfwPaQT6IhUqP/jVgaMvKsNENA5nbZsB4tgVXQ9Qm0YUshEJhKsmxvBrR8VmhnpoL0rn+4lSisvUjBAOhHnrFoct8A+3WeVJ1H6Z8bW1R20PUVgslQxwA9wY6NVJDCjZ9I8GBYVae2GQHE3B8rdAWDYu4QxhFv4A1JCNYW1I7U8H2aKt5D+S061MYkupkUE9Is=) 2026-01-07 00:23:37.636159 | orchestrator | 
changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJcZmxslssTh4eAB2SGRFNrjwGtQiq38td0oNQTGly3ey4d8Zv4lVFmXSbagyiyubF3/rOFFGfZ1KRPuaS1DLlM=) 2026-01-07 00:23:37.636165 | orchestrator | 2026-01-07 00:23:37.636171 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.636181 | orchestrator | Wednesday 07 January 2026 00:23:34 +0000 (0:00:01.068) 0:00:20.040 ***** 2026-01-07 00:23:37.636187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFuiwUtWGSwR5JMwGQh5X11HpKDWYiuFTWpUg/hH3mCsnpF/YmrqhNdWyRo3sXE8uVzsJKJ4GIVcgDeZhAUq83Z52T5Oq9wqBpTDUP12f9WcZXkpu7/KGFhsglHiawkSyg5kJQw5aMENeFUxj+l/u2aoZbwiRWQL7T0gkBQrAn8ZiKVsGd1/4k4LRuxFgOMILqFZ0WnTsrszcpp62k1mJU8o5wmZFlH7ouaOYNivTzLPhHrovVkuiEk6658bn8WG3YME07/ticNDNzmsdAkVS/ToRgqbndml9AmOUB4fn8EwNgU4a26NCU/KRPftMPSrJZfWG/KGlZjGrkvR8uN5HGzwqF9TMTPJBNtWOxahxxOMsuM06IdPfwJfRkDnPcNtSaD0ySYi2rbc2Ra7QifbFRreSf8Kk2bibXI9QZvzgr1I4ZRS3/X+dAvCzRSuFRzW3r00XzSVmZiCr+ebtuTR83s0xK1iuYORkNsAimrItaoFKybYENWX0OCkqmn/A5rD8=) 2026-01-07 00:23:37.636194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAq6dRQL5qZGGaG7dzg5agJ0CXtla9nOdEpna+gzl7dBPRNS8a1Dkh7sZzpxiWAsxlj8KTzaEF+CjBT9j2gjxaE=) 2026-01-07 00:23:37.636199 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+1HGa+AmPqyTZSGdUSHreBhvm+18UL7MYyVSiM4X9x) 2026-01-07 00:23:37.636205 | orchestrator | 2026-01-07 00:23:37.636211 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.636218 | orchestrator | Wednesday 07 January 2026 00:23:35 +0000 (0:00:01.037) 0:00:21.077 ***** 2026-01-07 00:23:37.636224 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCq2T0P3Rfva59U6yYxuo7jYGFehgJ/PwwzsXnorLyqyGU5+dRUzryJk7/p0ohYNbe5HtPxlLQWdC5WnDZ4XSxYnh/cr66LRn3LnrCWYfEi0qWMxrhLp9zZyiZi4xoeUv9vqOn6nakcbqH7klCb/TsMUIjrTrnX/4hBgx3/mPCVxvgWVr+t0qoOH1ALydxXFecpIaWu6Fx41xscU5xnXET4nVnLLQbbt1QZXFRSpfxh+9fu00WB8J9fhcYTHa38w6uf1ImbYnVFj3uoHcV3QW8t/0u4M3GeicCQPy8Xa99rD0KzDmAJQ0rilqtoOGuIFt5+ymC6gKA+iE4q1znSEUX+zUJ3YyMyxouJEwHGilOojxVWxxQ60uUDnhOwPuIUmVmdHjqXgdHLYmmxUrXFcfzuucJNC7UvoK3UH3yZAmNV/Ki2/GCbXn2NSUTMLN/OrAu4Yteyrfj5QjXESHx9shcrJFUX2xWRWrWvRCf4scYgaRFRWulU8V0rN8zg3MK1HU0=) 2026-01-07 00:23:37.636230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIUOIImMKwoO4dQWYuu8o+7FCvI3Yp07GdHmvic2qnze2hS0XsEtwqr0IJco4d5tnfVwnMBZh6t2Ryt/6i2f5i4=) 2026-01-07 00:23:37.636237 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINhS1b3vjFENegpnVlgtbr/2+fCji7qSvuoHK+qMTPTx) 2026-01-07 00:23:37.636242 | orchestrator | 2026-01-07 00:23:37.636248 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:37.636254 | orchestrator | Wednesday 07 January 2026 00:23:36 +0000 (0:00:01.017) 0:00:22.095 ***** 2026-01-07 00:23:37.636263 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkytWrrdKAgrJanW+dRrLiz7cinkgqu4zr07UKSh1KFL9mP3Zixnme33WZkDTTSHQBBUlz82WkouIoOrMTkad99OT8w7rBs9s0fZ6WgVFyyxqw5GxrYnVbKIpuOkS3gKp7Tk4qoTbyCfh4pynGNPLKVCvRHaa+S2JQcgH0+1Rt53btKqLLiv66+MKptg1B5SlxG1SSXQPtI+0P4uMPSltm4CsDv07et0wfgflVKkW/XwPr+RuQ5dLHvTKeRziXAZX/tkUEViTAbMrV91KqOMVc22dUG8P6NDr/Iq8XZMlGfLvR1qu7GDUfwkEGC/hb40StFR8l+ACYQ8O5JFt+25rum2PK0sN3eH2SxVtUq55IotnQWsW8aQUkzoJMq0H59kz2jzaDcrsh7Th3cTHsfPm2otyH8Dtg9gIbsWOxOPhAU7IkorZuH/im82dKx21fxrhrYADrSFPLz38WGWUIne3Jgbx4/as/giZhDX/pDLHypE9mglSS4dx9e3IK/fb/KFE=) 2026-01-07 00:23:41.815313 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAUt4B+tBhiVaCx10t7iMRu50p7cPQR4l2tltMF9ri18l9MAlXSTUDuBZX4Ycq8qTINw1KffQygLTndcMfnRWYw=) 2026-01-07 00:23:41.815452 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDuYxvjdTnxe75u4LCiaaOjAYtwYyuO1Oo61oOhcDIBQ) 2026-01-07 00:23:41.815470 | orchestrator | 2026-01-07 00:23:41.815482 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:41.815496 | orchestrator | Wednesday 07 January 2026 00:23:37 +0000 (0:00:01.095) 0:00:23.191 ***** 2026-01-07 00:23:41.815507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGmVcBv8mE5Ix8R+tWLPGdghNGXQAHhfeT+FhOQo4iP) 2026-01-07 00:23:41.815521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0ZqTu9TDtwrEGtxntAvTn8vTO1JdYN1oMg5pmvOgdgWkja+uMFJGrLBxIr0mpOoo5J6cFPpmEt8ef93Cll8cxhCC9fHkux+wCKhWUL/3sKlfLVzvocnUHkajpXaz0/GuuL6cIwgyEoR4BmMVlhnim9E1t44XRSnTG/0DPzlxLPQjKyLYPUn0uCFwS4/QVQMgBsZLq6Q6EMLgIcPhIjv2rplYT7+gTEnM9xoyIC+OgzS7b8oH9in+J2CngYlXPadxQ3Nz6I/bicFjgYYC8TrlvUeoeR9uQQd42cXRzO4bYPrIH46tTrAyxYHYx09Rd23OF2uE+XISJJ2seH6TnLEIc4p5dHA6P22K+dTwWA6MxkpZW9zZLGoAnDxXWF7aKpCFlXJTZstRoIV9wCYn8CcrvnhGunZaqU9gxSGIRMuA22iuAGf2XcAQbOUPd46fFOUiaIslgSxP6tMVdqHL+AIwYYXSfUO0Aol1Uived1vy5SF4AobbkbTwCgwsHOODKpsM=) 2026-01-07 00:23:41.815535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMf3xYUx7iM0kxGfHsdxRRcmnBoU1XAHOawERgU6ZpZ/YhdGv9wkiauSJ8I4KGWl2YXHMIABvu2e6ve1YdJnBFg=) 2026-01-07 00:23:41.815546 | orchestrator | 2026-01-07 00:23:41.815557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:41.815567 | orchestrator | Wednesday 07 January 2026 00:23:38 +0000 (0:00:01.016) 
0:00:24.208 ***** 2026-01-07 00:23:41.815579 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuwwk8dPS0t9AwotfnlvX+b2aCZJikYRIbtBJJ4rDXJ+df3+bzwQbaoV7lIiBL25OMOtlwKEmTjNnMGQNIAugw=) 2026-01-07 00:23:41.815590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQlD6VVrYehlb8DwJMcwK73NfqnrCnyYDLj2ufoTrS4) 2026-01-07 00:23:41.815601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIgTf5F5r6CNKT/TF6cz6e5wm7svm9zxi7ampbj/WDlMrvrmAY+llPLluBDwykHe3QJaaYZ/mYYF7qpDGgymbneRwTeZIDQvIFn/hQmGJ3qDIpLYdj8mKLb3HInR98PETCTDVFnNEyUaWHxe/nnOCmw0Id8+PEIHNn3QcjRrymTyqbl1rwoLA0ncjEOfozI1n9RhHbioxAJgBDIuJkHBzQ2T2K/qhumNvufG/JWNBQB+pU5HJDaiHRASEoVAnqOHxRHdw9uFBeQy0X66ji/unkLlmGtOeHhRiPCMRhv2IgDr/oJMkWqId7dv1eEa70G7i9oL+Cmc+uNED1l+QLQoSy4yesVAFEyxBoUTCG5+LghtuxdHoAVHGrN6j6h6XHeo+l5jqzz9NVyNA5H1+RUKNgZTtUcK88psHZVC5vxJ6oBs74bAv3CGFmnyu4S+enNGdd4UxF6AyZ1C0ypCX1ChhtFllaFv6DkkpxyV7xoXh047yrk7+6q9izdqg0MFrZDG8=) 2026-01-07 00:23:41.815612 | orchestrator | 2026-01-07 00:23:41.815624 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-07 00:23:41.815635 | orchestrator | Wednesday 07 January 2026 00:23:39 +0000 (0:00:01.025) 0:00:25.233 ***** 2026-01-07 00:23:41.815719 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDggXUNOeA68kyHaLGJwT7XIAz3Yj0mxJ8MJiOklybM+/m7T9TWSB1O9Ys0NuYHbyr2O4vnfoSUTBoePAPc7h/OdKGPZzED5TzF3Kkx/T6Bhjbdnh1IydJ5Vyw2paVNkVmzU4WFPTaZfdP6Gc/gwZ4OCrgJE4UEQSLgBzb6WnODTJn5+Byuo13JgH5cE3ND1BuB7Yh9y4ky58L60Zt/pLj/F1Ht3n0OqfGkQuyNsyMTCluuZnLBiwfNYP5bYbbxa1LhC25/D5kWIKSg/2oV/XsDfCQ45mbbjhSzWdPlAINAYT6NxUXqfMEzsMAofWQhtbdTsWOaN6kCa/NhtVdnQ0/mvd5eDeQRaGm2QnmnzpioF/Azcrs0MCF16cihpBxJXYlWjAhBYMfIpcjlYkdEzhwGIVCckXwWtFWMfEZ/mz6JPiSrVZ/O4aA7KCQeVsRbTv7pSuH2nTVralU7kExicKJ8EF50y4FfJVpXueBQX3AyMHYwvcw65DU1n5VovNpM5z8=) 2026-01-07 00:23:41.815733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE2unr15xrIf6zz9rWvqhMtOuSGiCvNx+t2ZdVicpbq5/kSMPa3GH5rte2tyXypGsPgLgPyd6N4hxVEfZOuD0rc=) 2026-01-07 00:23:41.815744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL16OZq64tlUqJzew/iI66aEbC6cV08jKlFxQ9zgwDYn) 2026-01-07 00:23:41.815771 | orchestrator | 2026-01-07 00:23:41.815782 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-07 00:23:41.815793 | orchestrator | Wednesday 07 January 2026 00:23:40 +0000 (0:00:00.997) 0:00:26.231 ***** 2026-01-07 00:23:41.815805 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-07 00:23:41.815816 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-07 00:23:41.815846 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-07 00:23:41.815858 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-07 00:23:41.815869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-07 00:23:41.815879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-07 00:23:41.815889 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-07 00:23:41.815900 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:23:41.815911 | orchestrator | 2026-01-07 00:23:41.815922 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-07 00:23:41.815933 | orchestrator | Wednesday 07 January 2026 00:23:40 +0000 (0:00:00.162) 0:00:26.394 ***** 2026-01-07 00:23:41.815943 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:41.815954 | orchestrator | 2026-01-07 00:23:41.815964 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-07 00:23:41.815975 | orchestrator | Wednesday 07 January 2026 00:23:40 +0000 (0:00:00.058) 0:00:26.452 ***** 2026-01-07 00:23:41.815986 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:23:41.815996 | orchestrator | 2026-01-07 00:23:41.816007 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-07 00:23:41.816018 | orchestrator | Wednesday 07 January 2026 00:23:40 +0000 (0:00:00.063) 0:00:26.515 ***** 2026-01-07 00:23:41.816029 | orchestrator | changed: [testbed-manager] 2026-01-07 00:23:41.816039 | orchestrator | 2026-01-07 00:23:41.816050 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:23:41.816067 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:23:41.816079 | orchestrator | 2026-01-07 00:23:41.816090 | orchestrator | 2026-01-07 00:23:41.816100 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:23:41.816111 | orchestrator | Wednesday 07 January 2026 00:23:41 +0000 (0:00:00.685) 0:00:27.201 ***** 2026-01-07 00:23:41.816122 | orchestrator | =============================================================================== 2026-01-07 00:23:41.816133 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.97s 2026-01-07 
00:23:41.816144 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.18s 2026-01-07 00:23:41.816155 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-07 00:23:41.816187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-07 00:23:41.816198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-07 00:23:41.816209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-07 00:23:41.816220 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-07 00:23:41.816231 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-07 00:23:41.816241 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-07 00:23:41.816252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-07 00:23:41.816263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-07 00:23:41.816274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-07 00:23:41.816284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-07 00:23:41.816303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-07 00:23:41.816314 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-07 00:23:41.816324 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-01-07 00:23:41.816335 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2026-01-07 
00:23:41.816346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-01-07 00:23:41.816358 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-01-07 00:23:41.816369 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-07 00:23:42.080263 | orchestrator | + osism apply squid 2026-01-07 00:23:54.136466 | orchestrator | 2026-01-07 00:23:54 | INFO  | Task 25f5e481-5a08-43d0-8418-cf4bfa4525d6 (squid) was prepared for execution. 2026-01-07 00:23:54.136570 | orchestrator | 2026-01-07 00:23:54 | INFO  | It takes a moment until task 25f5e481-5a08-43d0-8418-cf4bfa4525d6 (squid) has been started and output is visible here. 2026-01-07 00:25:51.899250 | orchestrator | 2026-01-07 00:25:51.899355 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-07 00:25:51.899369 | orchestrator | 2026-01-07 00:25:51.899380 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-07 00:25:51.899391 | orchestrator | Wednesday 07 January 2026 00:23:58 +0000 (0:00:00.156) 0:00:00.156 ***** 2026-01-07 00:25:51.899402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:25:51.899413 | orchestrator | 2026-01-07 00:25:51.899423 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-07 00:25:51.899433 | orchestrator | Wednesday 07 January 2026 00:23:58 +0000 (0:00:00.078) 0:00:00.234 ***** 2026-01-07 00:25:51.899443 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.899459 | orchestrator | 2026-01-07 00:25:51.899477 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-07 
00:25:51.899494 | orchestrator | Wednesday 07 January 2026 00:23:59 +0000 (0:00:01.398) 0:00:01.633 ***** 2026-01-07 00:25:51.899511 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-07 00:25:51.899527 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-07 00:25:51.899544 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-07 00:25:51.899560 | orchestrator | 2026-01-07 00:25:51.899576 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-07 00:25:51.899592 | orchestrator | Wednesday 07 January 2026 00:24:00 +0000 (0:00:01.148) 0:00:02.781 ***** 2026-01-07 00:25:51.899609 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-07 00:25:51.899681 | orchestrator | 2026-01-07 00:25:51.899702 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-07 00:25:51.899719 | orchestrator | Wednesday 07 January 2026 00:24:01 +0000 (0:00:01.058) 0:00:03.840 ***** 2026-01-07 00:25:51.899736 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.899752 | orchestrator | 2026-01-07 00:25:51.899770 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-07 00:25:51.899788 | orchestrator | Wednesday 07 January 2026 00:24:02 +0000 (0:00:00.338) 0:00:04.179 ***** 2026-01-07 00:25:51.899805 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:51.899822 | orchestrator | 2026-01-07 00:25:51.899839 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-07 00:25:51.899861 | orchestrator | Wednesday 07 January 2026 00:24:03 +0000 (0:00:00.926) 0:00:05.105 ***** 2026-01-07 00:25:51.899877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-07 00:25:51.899895 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.899944 | orchestrator | 2026-01-07 00:25:51.899955 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-07 00:25:51.899965 | orchestrator | Wednesday 07 January 2026 00:24:39 +0000 (0:00:35.955) 0:00:41.061 ***** 2026-01-07 00:25:51.899975 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:51.899984 | orchestrator | 2026-01-07 00:25:51.899994 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-07 00:25:51.900004 | orchestrator | Wednesday 07 January 2026 00:24:50 +0000 (0:00:11.762) 0:00:52.823 ***** 2026-01-07 00:25:51.900013 | orchestrator | Pausing for 60 seconds 2026-01-07 00:25:51.900024 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:51.900034 | orchestrator | 2026-01-07 00:25:51.900048 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-07 00:25:51.900069 | orchestrator | Wednesday 07 January 2026 00:25:50 +0000 (0:01:00.112) 0:01:52.936 ***** 2026-01-07 00:25:51.900091 | orchestrator | ok: [testbed-manager] 2026-01-07 00:25:51.900105 | orchestrator | 2026-01-07 00:25:51.900120 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-07 00:25:51.900135 | orchestrator | Wednesday 07 January 2026 00:25:50 +0000 (0:00:00.073) 0:01:53.010 ***** 2026-01-07 00:25:51.900150 | orchestrator | changed: [testbed-manager] 2026-01-07 00:25:51.900165 | orchestrator | 2026-01-07 00:25:51.900178 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:25:51.900193 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:25:51.900209 | orchestrator | 2026-01-07 00:25:51.900224 | orchestrator | 2026-01-07 00:25:51.900241 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-07 00:25:51.900258 | orchestrator | Wednesday 07 January 2026 00:25:51 +0000 (0:00:00.639) 0:01:53.649 ***** 2026-01-07 00:25:51.900274 | orchestrator | =============================================================================== 2026-01-07 00:25:51.900303 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-01-07 00:25:51.900314 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.96s 2026-01-07 00:25:51.900323 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.76s 2026-01-07 00:25:51.900333 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.40s 2026-01-07 00:25:51.900343 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2026-01-07 00:25:51.900353 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2026-01-07 00:25:51.900363 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2026-01-07 00:25:51.900372 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-01-07 00:25:51.900382 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-01-07 00:25:51.900391 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-07 00:25:51.900401 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-07 00:25:52.216027 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-07 00:25:52.217269 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-07 00:25:52.276941 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-07 00:25:52.277031 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release
2026-01-07 00:25:52.283003 | orchestrator | + set -e
2026-01-07 00:25:52.283341 | orchestrator | + NAMESPACE=kolla/release
2026-01-07 00:25:52.283367 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-07 00:25:52.289843 | orchestrator | ++ semver 9.5.0 9.0.0
2026-01-07 00:25:52.363039 | orchestrator | + [[ 1 -lt 0 ]]
2026-01-07 00:25:52.363801 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-07 00:26:04.364513 | orchestrator | 2026-01-07 00:26:04 | INFO  | Task c4c07b11-f96c-4327-aabc-c10eb1173815 (operator) was prepared for execution.
2026-01-07 00:26:04.364714 | orchestrator | 2026-01-07 00:26:04 | INFO  | It takes a moment until task c4c07b11-f96c-4327-aabc-c10eb1173815 (operator) has been started and output is visible here.
2026-01-07 00:26:20.853282 | orchestrator |
2026-01-07 00:26:20.853514 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-07 00:26:20.853549 | orchestrator |
2026-01-07 00:26:20.853571 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-07 00:26:20.853592 | orchestrator | Wednesday 07 January 2026 00:26:08 +0000 (0:00:00.103) 0:00:00.103 *****
2026-01-07 00:26:20.853613 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:26:20.853658 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:26:20.853679 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:26:20.853698 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:26:20.853717 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:26:20.853815 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:26:20.853840 | orchestrator |
2026-01-07 00:26:20.853860 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-07 00:26:20.853882 | orchestrator | Wednesday 07 January 2026 00:26:12 +0000 (0:00:04.331) 0:00:04.434 *****
2026-01-07 00:26:20.853902 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:26:20.853922 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:26:20.853941 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:26:20.853959 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:26:20.853979 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:26:20.853998 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:26:20.854090 | orchestrator |
2026-01-07 00:26:20.854118 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-07 00:26:20.854138 | orchestrator |
2026-01-07 00:26:20.854157 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-07 00:26:20.854178 | orchestrator | Wednesday 07 January 2026 00:26:13 +0000 (0:00:00.784) 0:00:05.219 *****
2026-01-07 00:26:20.854196 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:26:20.854215 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:26:20.854234 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:26:20.854255 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:26:20.854298 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:26:20.854318 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:26:20.854337 | orchestrator |
2026-01-07 00:26:20.854355 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-07 00:26:20.854375 | orchestrator | Wednesday 07 January 2026 00:26:13 +0000 (0:00:00.200) 0:00:05.420 *****
2026-01-07 00:26:20.854395 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:26:20.854414 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:26:20.854433 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:26:20.854452 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:26:20.854471 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:26:20.854490 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:26:20.854510 | orchestrator |
2026-01-07 00:26:20.854530 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-07 00:26:20.854549 | orchestrator | Wednesday 07 January 2026 00:26:13 +0000 (0:00:00.188) 0:00:05.609 *****
2026-01-07 00:26:20.854568 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:20.854766 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:20.854791 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:20.854810 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:20.854829 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:20.854848 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:20.854866 | orchestrator |
2026-01-07 00:26:20.854885 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-07 00:26:20.854904 | orchestrator | Wednesday 07 January 2026 00:26:14 +0000 (0:00:00.620) 0:00:06.230 *****
2026-01-07 00:26:20.854923 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:20.854942 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:20.854960 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:20.854978 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:20.855028 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:20.855049 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:20.855067 | orchestrator |
2026-01-07 00:26:20.855085 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-07 00:26:20.855104 | orchestrator | Wednesday 07 January 2026 00:26:14 +0000 (0:00:00.772) 0:00:07.002 *****
2026-01-07 00:26:20.855123 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-07 00:26:20.855143 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-07 00:26:20.855161 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-07 00:26:20.855179 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-07 00:26:20.855197 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-07 00:26:20.855215 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-07 00:26:20.855234 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-07 00:26:20.855253 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-07 00:26:20.855271 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-07 00:26:20.855289 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-07 00:26:20.855308 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-07 00:26:20.855328 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-07 00:26:20.855347 | orchestrator |
2026-01-07 00:26:20.855365 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-07 00:26:20.855384 | orchestrator | Wednesday 07 January 2026 00:26:16 +0000 (0:00:01.188) 0:00:08.190 *****
2026-01-07 00:26:20.855409 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:20.855436 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:20.855451 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:20.855531 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:20.855553 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:20.855568 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:20.855583 | orchestrator |
2026-01-07 00:26:20.855601 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-07 00:26:20.855619 | orchestrator | Wednesday 07 January 2026 00:26:17 +0000 (0:00:01.232) 0:00:09.423 *****
2026-01-07 00:26:20.855666 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-07 00:26:20.855684 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-07 00:26:20.855701 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-07 00:26:20.855718 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855768 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855889 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855914 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855931 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855947 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-07 00:26:20.855957 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.855967 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.855976 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.855986 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.855996 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.856005 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-07 00:26:20.856014 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856029 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856045 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856086 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856105 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856120 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-07 00:26:20.856135 | orchestrator |
2026-01-07 00:26:20.856150 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-07 00:26:20.856166 | orchestrator | Wednesday 07 January 2026 00:26:18 +0000 (0:00:01.259) 0:00:10.683 *****
2026-01-07 00:26:20.856180 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:20.856196 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:20.856211 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:20.856229 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:20.856244 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:20.856342 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:20.856366 | orchestrator |
2026-01-07 00:26:20.856381 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-07 00:26:20.856399 | orchestrator | Wednesday 07 January 2026 00:26:18 +0000 (0:00:00.144) 0:00:10.828 *****
2026-01-07 00:26:20.856416 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:20.856432 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:20.856447 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:20.856457 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:20.856466 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:20.856476 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:20.856485 | orchestrator |
2026-01-07 00:26:20.856495 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-07 00:26:20.856505 | orchestrator | Wednesday 07 January 2026 00:26:18 +0000 (0:00:00.163) 0:00:10.992 *****
2026-01-07 00:26:20.856514 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:20.856524 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:20.856533 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:20.856543 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:20.856552 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:20.856561 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:20.856571 | orchestrator |
2026-01-07 00:26:20.856581 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-07 00:26:20.856590 | orchestrator | Wednesday 07 January 2026 00:26:19 +0000 (0:00:00.615) 0:00:11.607 *****
2026-01-07 00:26:20.856600 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:20.856609 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:20.856618 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:20.856657 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:20.856668 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:20.856694 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:20.856704 | orchestrator |
2026-01-07 00:26:20.856713 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-07 00:26:20.856723 | orchestrator | Wednesday 07 January 2026 00:26:19 +0000 (0:00:00.190) 0:00:11.798 *****
2026-01-07 00:26:20.856733 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 00:26:20.856743 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:20.856752 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-07 00:26:20.856762 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 00:26:20.856771 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:20.856781 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:20.856790 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 00:26:20.856800 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:20.856809 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-07 00:26:20.856819 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-07 00:26:20.856828 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:20.856838 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:20.856847 | orchestrator |
2026-01-07 00:26:20.856857 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-07 00:26:20.856877 | orchestrator | Wednesday 07 January 2026 00:26:20 +0000 (0:00:00.779) 0:00:12.577 *****
2026-01-07 00:26:20.856887 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:20.856896 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:20.856905 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:20.856915 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:20.856924 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:20.856934 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:20.856943 | orchestrator |
2026-01-07 00:26:20.856953 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-07 00:26:20.856963 | orchestrator | Wednesday 07 January 2026 00:26:20 +0000 (0:00:00.140) 0:00:12.718 *****
2026-01-07 00:26:20.856972 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:20.856982 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:20.856991 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:20.857001 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:20.857025 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:22.221173 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:22.221280 | orchestrator |
2026-01-07 00:26:22.221297 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-07 00:26:22.221310 | orchestrator | Wednesday 07 January 2026 00:26:20 +0000 (0:00:00.143) 0:00:12.861 *****
2026-01-07 00:26:22.221322 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:22.221333 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:22.221344 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:22.221355 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:22.221365 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:22.221376 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:22.221387 | orchestrator |
2026-01-07 00:26:22.221398 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-07 00:26:22.221409 | orchestrator | Wednesday 07 January 2026 00:26:21 +0000 (0:00:00.157) 0:00:13.019 *****
2026-01-07 00:26:22.221419 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:26:22.221430 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:26:22.221441 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:26:22.221452 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:26:22.221462 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:26:22.221473 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:26:22.221484 | orchestrator |
2026-01-07 00:26:22.221494 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-07 00:26:22.221506 | orchestrator | Wednesday 07 January 2026 00:26:21 +0000 (0:00:00.711) 0:00:13.731 *****
2026-01-07 00:26:22.221518 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:26:22.221528 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:26:22.221539 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:26:22.221550 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:26:22.221579 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:26:22.221590 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:26:22.221601 | orchestrator |
2026-01-07 00:26:22.221612 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:26:22.221702 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221717 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221730 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221744 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221778 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221791 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 00:26:22.221804 | orchestrator |
2026-01-07 00:26:22.221817 | orchestrator |
2026-01-07 00:26:22.221829 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:26:22.221843 | orchestrator | Wednesday 07 January 2026 00:26:21 +0000 (0:00:00.253) 0:00:13.985 *****
2026-01-07 00:26:22.221855 | orchestrator | ===============================================================================
2026-01-07 00:26:22.221868 | orchestrator | Gathering Facts --------------------------------------------------------- 4.33s
2026-01-07 00:26:22.221881 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2026-01-07 00:26:22.221894 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s
2026-01-07 00:26:22.221907 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2026-01-07 00:26:22.221920 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-01-07 00:26:22.221933 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.78s
2026-01-07 00:26:22.221945 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2026-01-07 00:26:22.221958 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2026-01-07 00:26:22.221970 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2026-01-07 00:26:22.221982 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2026-01-07 00:26:22.221995 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-01-07 00:26:22.222007 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2026-01-07 00:26:22.222082 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-01-07 00:26:22.222096 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-01-07 00:26:22.222109 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-01-07 00:26:22.222120 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-01-07 00:26:22.222131 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-01-07 00:26:22.222142 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-01-07 00:26:22.222153 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-01-07 00:26:22.600224 | orchestrator | + osism apply --environment custom facts
2026-01-07 00:26:24.578176 | orchestrator | 2026-01-07 00:26:24 | INFO  | Trying to run play facts in environment custom
2026-01-07 00:26:34.660055 | orchestrator | 2026-01-07 00:26:34 | INFO  | Task c6086548-9254-4803-83e8-e4452a1dd033 (facts) was prepared for execution.
2026-01-07 00:26:34.660195 | orchestrator | 2026-01-07 00:26:34 | INFO  | It takes a moment until task c6086548-9254-4803-83e8-e4452a1dd033 (facts) has been started and output is visible here.
2026-01-07 00:27:19.256715 | orchestrator |
2026-01-07 00:27:19.256838 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-07 00:27:19.256857 | orchestrator |
2026-01-07 00:27:19.256870 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:27:19.256882 | orchestrator | Wednesday 07 January 2026 00:26:38 +0000 (0:00:00.080) 0:00:00.080 *****
2026-01-07 00:27:19.256893 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:19.256905 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.256917 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.256928 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:27:19.256961 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.256973 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:27:19.256984 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:27:19.256994 | orchestrator |
2026-01-07 00:27:19.257005 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-07 00:27:19.257016 | orchestrator | Wednesday 07 January 2026 00:26:39 +0000 (0:00:01.371) 0:00:01.451 *****
2026-01-07 00:27:19.257027 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:19.257038 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.257049 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.257060 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:27:19.257070 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.257081 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:27:19.257093 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:27:19.257112 | orchestrator |
2026-01-07 00:27:19.257130 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-07 00:27:19.257149 | orchestrator |
2026-01-07 00:27:19.257167 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-07 00:27:19.257185 | orchestrator | Wednesday 07 January 2026 00:26:41 +0000 (0:00:01.232) 0:00:02.683 *****
2026-01-07 00:27:19.257203 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.257221 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.257241 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.257259 | orchestrator |
2026-01-07 00:27:19.257279 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-07 00:27:19.257300 | orchestrator | Wednesday 07 January 2026 00:26:41 +0000 (0:00:00.085) 0:00:02.769 *****
2026-01-07 00:27:19.257319 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.257337 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.257351 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.257364 | orchestrator |
2026-01-07 00:27:19.257376 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-07 00:27:19.257393 | orchestrator | Wednesday 07 January 2026 00:26:41 +0000 (0:00:00.236) 0:00:03.005 *****
2026-01-07 00:27:19.257411 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.257428 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.257449 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.257468 | orchestrator |
2026-01-07 00:27:19.257488 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-07 00:27:19.257508 | orchestrator | Wednesday 07 January 2026 00:26:41 +0000 (0:00:00.217) 0:00:03.223 *****
2026-01-07 00:27:19.257530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:27:19.257549 | orchestrator |
2026-01-07 00:27:19.257567 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-07 00:27:19.257583 | orchestrator | Wednesday 07 January 2026 00:26:41 +0000 (0:00:00.137) 0:00:03.360 *****
2026-01-07 00:27:19.257598 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.257615 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.257686 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.257702 | orchestrator |
2026-01-07 00:27:19.257718 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-07 00:27:19.257737 | orchestrator | Wednesday 07 January 2026 00:26:42 +0000 (0:00:00.439) 0:00:03.800 *****
2026-01-07 00:27:19.257755 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:27:19.257772 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:27:19.257790 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:27:19.257807 | orchestrator |
2026-01-07 00:27:19.257824 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-07 00:27:19.257842 | orchestrator | Wednesday 07 January 2026 00:26:42 +0000 (0:00:00.141) 0:00:03.941 *****
2026-01-07 00:27:19.257859 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.257878 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.257913 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.257931 | orchestrator |
2026-01-07 00:27:19.257950 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-07 00:27:19.257968 | orchestrator | Wednesday 07 January 2026 00:26:43 +0000 (0:00:01.064) 0:00:05.005 *****
2026-01-07 00:27:19.257987 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.258005 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.258101 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.258124 | orchestrator |
2026-01-07 00:27:19.258142 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-07 00:27:19.258223 | orchestrator | Wednesday 07 January 2026 00:26:44 +0000 (0:00:00.487) 0:00:05.493 *****
2026-01-07 00:27:19.258238 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.258248 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.258259 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.258270 | orchestrator |
2026-01-07 00:27:19.258281 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-07 00:27:19.258292 | orchestrator | Wednesday 07 January 2026 00:26:45 +0000 (0:00:01.081) 0:00:06.574 *****
2026-01-07 00:27:19.258303 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.258378 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.258389 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.258400 | orchestrator |
2026-01-07 00:27:19.258411 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-07 00:27:19.258422 | orchestrator | Wednesday 07 January 2026 00:27:02 +0000 (0:00:16.933) 0:00:23.508 *****
2026-01-07 00:27:19.258433 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:27:19.258444 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:27:19.258455 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:27:19.258466 | orchestrator |
2026-01-07 00:27:19.258477 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-07 00:27:19.258515 | orchestrator | Wednesday 07 January 2026 00:27:02 +0000 (0:00:00.086) 0:00:23.594 *****
2026-01-07 00:27:19.258527 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:27:19.258538 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:27:19.258548 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:27:19.258559 | orchestrator |
2026-01-07 00:27:19.258570 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-07 00:27:19.258581 | orchestrator | Wednesday 07 January 2026 00:27:09 +0000 (0:00:07.783) 0:00:31.378 *****
2026-01-07 00:27:19.258592 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.258603 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.258613 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.258655 | orchestrator |
2026-01-07 00:27:19.258669 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-07 00:27:19.258680 | orchestrator | Wednesday 07 January 2026 00:27:10 +0000 (0:00:00.459) 0:00:31.837 *****
2026-01-07 00:27:19.258691 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-07 00:27:19.258709 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-07 00:27:19.258720 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-07 00:27:19.258730 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:19.258741 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:19.258751 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-07 00:27:19.258761 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:19.258772 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:19.258782 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-07 00:27:19.258793 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:19.258804 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:19.258814 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-07 00:27:19.258837 | orchestrator |
2026-01-07 00:27:19.258848 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:27:19.258858 | orchestrator | Wednesday 07 January 2026 00:27:14 +0000 (0:00:03.714) 0:00:35.551 *****
2026-01-07 00:27:19.258869 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.258880 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.258890 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.258901 | orchestrator |
2026-01-07 00:27:19.258911 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:27:19.258922 | orchestrator |
2026-01-07 00:27:19.258933 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:27:19.258943 | orchestrator | Wednesday 07 January 2026 00:27:15 +0000 (0:00:01.350) 0:00:36.902 *****
2026-01-07 00:27:19.258954 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:27:19.258965 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:27:19.258976 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:27:19.258986 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:19.258997 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:19.259007 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:19.259018 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:19.259028 | orchestrator |
2026-01-07 00:27:19.259039 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:27:19.259064 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:19.259076 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:19.259088 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:19.259099 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:27:19.259109 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:19.259120 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:19.259131 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:27:19.259142 | orchestrator |
2026-01-07 00:27:19.259152 | orchestrator |
2026-01-07 00:27:19.259163 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:27:19.259174 | orchestrator | Wednesday 07 January 2026 00:27:19 +0000 (0:00:03.790) 0:00:40.692 *****
2026-01-07 00:27:19.259185 | orchestrator | ===============================================================================
2026-01-07 00:27:19.259195 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.93s
2026-01-07 00:27:19.259206 | orchestrator | Install required packages (Debian) -------------------------------------- 7.78s
2026-01-07 00:27:19.259217 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.79s
2026-01-07 00:27:19.259227 | orchestrator | Copy fact files --------------------------------------------------------- 3.71s
2026-01-07 00:27:19.259237 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-01-07 00:27:19.259248 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.35s
2026-01-07 00:27:19.259267 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2026-01-07 00:27:19.477083 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-01-07 00:27:19.477175 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-01-07 00:27:19.477204 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-01-07 00:27:19.477208 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-07 00:27:19.477212 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-01-07 00:27:19.477217 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s
2026-01-07 00:27:19.477220 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-07 00:27:19.477224 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-01-07 00:27:19.477239 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-01-07 00:27:19.477244 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-01-07 00:27:19.477248 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-01-07 00:27:19.821524 | orchestrator | + osism apply bootstrap
2026-01-07 00:27:31.787206 | orchestrator | 2026-01-07 00:27:31 | INFO  | Task 9052a1cc-0b64-4228-b2c8-9dc839347eb4 (bootstrap) was prepared for execution.
2026-01-07 00:27:31.787308 | orchestrator | 2026-01-07 00:27:31 | INFO  | It takes a moment until task 9052a1cc-0b64-4228-b2c8-9dc839347eb4 (bootstrap) has been started and output is visible here.
2026-01-07 00:27:48.395346 | orchestrator |
2026-01-07 00:27:48.395465 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-07 00:27:48.395480 | orchestrator |
2026-01-07 00:27:48.395489 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-07 00:27:48.395498 | orchestrator | Wednesday 07 January 2026 00:27:36 +0000 (0:00:00.157) 0:00:00.157 *****
2026-01-07 00:27:48.395507 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:48.395519 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:48.395533 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:48.395546 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:48.395560 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:27:48.395575 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:27:48.395603 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:27:48.395646 | orchestrator |
2026-01-07 00:27:48.395658 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:27:48.395668 | orchestrator |
2026-01-07 00:27:48.395677 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:27:48.395686 | orchestrator | Wednesday 07 January 2026 00:27:36 +0000 (0:00:00.264) 0:00:00.421 *****
2026-01-07 00:27:48.395694 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:27:48.395706 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:27:48.395719 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:27:48.395736 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:27:48.395754 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:27:48.395767 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:27:48.395780 | orchestrator | ok: [testbed-manager]
2026-01-07 00:27:48.395792 | orchestrator |
2026-01-07 00:27:48.395807 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-07 00:27:48.395820 | orchestrator | 2026-01-07 00:27:48.395835 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-07 00:27:48.395849 | orchestrator | Wednesday 07 January 2026 00:27:40 +0000 (0:00:03.841) 0:00:04.263 ***** 2026-01-07 00:27:48.395861 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-07 00:27:48.395869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-07 00:27:48.395878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-01-07 00:27:48.395888 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-07 00:27:48.395898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:27:48.395908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:27:48.395917 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-07 00:27:48.395948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:27:48.395958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-01-07 00:27:48.395967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-07 00:27:48.395976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:27:48.395986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-01-07 00:27:48.395995 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-07 00:27:48.396005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-07 00:27:48.396014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-07 00:27:48.396026 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-07 00:27:48.396039 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:27:48.396053 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2026-01-07 00:27:48.396068 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-07 00:27:48.396077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-01-07 00:27:48.396087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-07 00:27:48.396098 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-01-07 00:27:48.396112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-07 00:27:48.396127 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-07 00:27:48.396138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:27:48.396147 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:27:48.396156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 00:27:48.396165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-07 00:27:48.396174 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-07 00:27:48.396184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-07 00:27:48.396193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 00:27:48.396202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-07 00:27:48.396210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-07 00:27:48.396218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-07 00:27:48.396226 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 00:27:48.396234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:27:48.396242 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:27:48.396250 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-07 00:27:48.396257 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-0)  2026-01-07 00:27:48.396265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:27:48.396273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-07 00:27:48.396282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-07 00:27:48.396304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 00:27:48.396321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:27:48.396330 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:27:48.396339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-07 00:27:48.396366 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-07 00:27:48.396376 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 00:27:48.396390 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:27:48.396403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-07 00:27:48.396430 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:27:48.396440 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-07 00:27:48.396448 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-07 00:27:48.396464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-07 00:27:48.396472 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-07 00:27:48.396479 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:27:48.396487 | orchestrator | 2026-01-07 00:27:48.396495 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-07 00:27:48.396503 | orchestrator | 2026-01-07 00:27:48.396511 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-07 00:27:48.396519 | orchestrator | Wednesday 07 January 2026 00:27:40 +0000 
(0:00:00.490) 0:00:04.753 ***** 2026-01-07 00:27:48.396527 | orchestrator | ok: [testbed-manager] 2026-01-07 00:27:48.396535 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:27:48.396543 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:27:48.396551 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:27:48.396558 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:27:48.396566 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:27:48.396574 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:27:48.396582 | orchestrator | 2026-01-07 00:27:48.396590 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-07 00:27:48.396598 | orchestrator | Wednesday 07 January 2026 00:27:42 +0000 (0:00:01.373) 0:00:06.127 ***** 2026-01-07 00:27:48.396607 | orchestrator | ok: [testbed-manager] 2026-01-07 00:27:48.396672 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:27:48.396687 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:27:48.396700 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:27:48.396709 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:27:48.396716 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:27:48.396724 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:27:48.396732 | orchestrator | 2026-01-07 00:27:48.396740 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-07 00:27:48.396748 | orchestrator | Wednesday 07 January 2026 00:27:43 +0000 (0:00:01.292) 0:00:07.419 ***** 2026-01-07 00:27:48.396757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:27:48.396767 | orchestrator | 2026-01-07 00:27:48.396775 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-07 00:27:48.396783 | orchestrator | 
Wednesday 07 January 2026 00:27:43 +0000 (0:00:00.319) 0:00:07.739 ***** 2026-01-07 00:27:48.396791 | orchestrator | changed: [testbed-manager] 2026-01-07 00:27:48.396799 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:27:48.396806 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:27:48.396814 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:27:48.396822 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:27:48.396830 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:27:48.396837 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:27:48.396845 | orchestrator | 2026-01-07 00:27:48.396853 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-07 00:27:48.396861 | orchestrator | Wednesday 07 January 2026 00:27:45 +0000 (0:00:02.055) 0:00:09.795 ***** 2026-01-07 00:27:48.396869 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:27:48.396878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:27:48.396888 | orchestrator | 2026-01-07 00:27:48.396896 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-07 00:27:48.396904 | orchestrator | Wednesday 07 January 2026 00:27:45 +0000 (0:00:00.260) 0:00:10.055 ***** 2026-01-07 00:27:48.396912 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:27:48.396920 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:27:48.396928 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:27:48.396936 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:27:48.396955 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:27:48.396963 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:27:48.396971 | orchestrator | 2026-01-07 00:27:48.396979 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-07 00:27:48.396987 | orchestrator | Wednesday 07 January 2026 00:27:47 +0000 (0:00:01.060) 0:00:11.116 ***** 2026-01-07 00:27:48.396995 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:27:48.397002 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:27:48.397010 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:27:48.397018 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:27:48.397026 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:27:48.397034 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:27:48.397041 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:27:48.397049 | orchestrator | 2026-01-07 00:27:48.397057 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-07 00:27:48.397071 | orchestrator | Wednesday 07 January 2026 00:27:47 +0000 (0:00:00.736) 0:00:11.853 ***** 2026-01-07 00:27:48.397079 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:27:48.397087 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:27:48.397095 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:27:48.397102 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:27:48.397110 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:27:48.397118 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:27:48.397126 | orchestrator | ok: [testbed-manager] 2026-01-07 00:27:48.397134 | orchestrator | 2026-01-07 00:27:48.397141 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-07 00:27:48.397150 | orchestrator | Wednesday 07 January 2026 00:27:48 +0000 (0:00:00.433) 0:00:12.287 ***** 2026-01-07 00:27:48.397158 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:27:48.397166 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:27:48.397182 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:01.085668 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:28:01.085785 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:01.085802 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:01.085815 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:01.085827 | orchestrator | 2026-01-07 00:28:01.085840 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-07 00:28:01.085853 | orchestrator | Wednesday 07 January 2026 00:27:48 +0000 (0:00:00.225) 0:00:12.512 ***** 2026-01-07 00:28:01.085867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:01.085898 | orchestrator | 2026-01-07 00:28:01.085910 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-07 00:28:01.085922 | orchestrator | Wednesday 07 January 2026 00:27:48 +0000 (0:00:00.284) 0:00:12.797 ***** 2026-01-07 00:28:01.085933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:01.085945 | orchestrator | 2026-01-07 00:28:01.085956 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-07 00:28:01.085967 | orchestrator | Wednesday 07 January 2026 00:27:49 +0000 (0:00:00.290) 0:00:13.087 ***** 2026-01-07 00:28:01.085978 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.085991 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.086002 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.086014 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.086089 | orchestrator | ok: [testbed-node-0] 2026-01-07 
00:28:01.086100 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.086111 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.086125 | orchestrator | 2026-01-07 00:28:01.086166 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-07 00:28:01.086180 | orchestrator | Wednesday 07 January 2026 00:27:50 +0000 (0:00:01.557) 0:00:14.644 ***** 2026-01-07 00:28:01.086193 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:01.086207 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:01.086220 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:01.086233 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:01.086246 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:01.086259 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:01.086273 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:01.086287 | orchestrator | 2026-01-07 00:28:01.086301 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-07 00:28:01.086315 | orchestrator | Wednesday 07 January 2026 00:27:50 +0000 (0:00:00.255) 0:00:14.900 ***** 2026-01-07 00:28:01.086326 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.086337 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.086349 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.086359 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.086370 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.086381 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.086392 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.086403 | orchestrator | 2026-01-07 00:28:01.086414 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-07 00:28:01.086425 | orchestrator | Wednesday 07 January 2026 00:27:51 +0000 (0:00:00.605) 0:00:15.505 ***** 2026-01-07 00:28:01.086436 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 00:28:01.086447 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:01.086458 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:01.086469 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:01.086480 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:01.086491 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:01.086502 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:01.086513 | orchestrator | 2026-01-07 00:28:01.086525 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-07 00:28:01.086537 | orchestrator | Wednesday 07 January 2026 00:27:51 +0000 (0:00:00.402) 0:00:15.907 ***** 2026-01-07 00:28:01.086548 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.086559 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:01.086570 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:01.086581 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:01.086592 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:01.086603 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:01.086613 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:01.086681 | orchestrator | 2026-01-07 00:28:01.086693 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-07 00:28:01.086704 | orchestrator | Wednesday 07 January 2026 00:27:52 +0000 (0:00:00.582) 0:00:16.489 ***** 2026-01-07 00:28:01.086715 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.086726 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:01.086737 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:01.086748 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:01.086759 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:01.086770 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:01.086780 | orchestrator | changed: 
[testbed-node-2] 2026-01-07 00:28:01.086791 | orchestrator | 2026-01-07 00:28:01.086811 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-07 00:28:01.086823 | orchestrator | Wednesday 07 January 2026 00:27:53 +0000 (0:00:01.290) 0:00:17.780 ***** 2026-01-07 00:28:01.086834 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.086845 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.086856 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.086866 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.086877 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.086888 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.086917 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.086928 | orchestrator | 2026-01-07 00:28:01.086939 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-07 00:28:01.086950 | orchestrator | Wednesday 07 January 2026 00:27:54 +0000 (0:00:01.060) 0:00:18.841 ***** 2026-01-07 00:28:01.086993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:01.087006 | orchestrator | 2026-01-07 00:28:01.087017 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-07 00:28:01.087028 | orchestrator | Wednesday 07 January 2026 00:27:55 +0000 (0:00:00.315) 0:00:19.156 ***** 2026-01-07 00:28:01.087039 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:28:01.087050 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:28:01.087061 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:01.087071 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:28:01.087082 | orchestrator | changed: [testbed-node-0] 2026-01-07 
00:28:01.087093 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:28:01.087104 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:01.087115 | orchestrator | 2026-01-07 00:28:01.087126 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-07 00:28:01.087137 | orchestrator | Wednesday 07 January 2026 00:27:56 +0000 (0:00:01.262) 0:00:20.419 ***** 2026-01-07 00:28:01.087148 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087158 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087169 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087180 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087191 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.087202 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.087213 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.087224 | orchestrator | 2026-01-07 00:28:01.087235 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-07 00:28:01.087246 | orchestrator | Wednesday 07 January 2026 00:27:56 +0000 (0:00:00.263) 0:00:20.683 ***** 2026-01-07 00:28:01.087256 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087267 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087278 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087288 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087299 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.087310 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.087320 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.087331 | orchestrator | 2026-01-07 00:28:01.087342 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-07 00:28:01.087353 | orchestrator | Wednesday 07 January 2026 00:27:56 +0000 (0:00:00.236) 0:00:20.919 ***** 2026-01-07 00:28:01.087364 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087375 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087386 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087397 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087407 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.087418 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.087429 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.087440 | orchestrator | 2026-01-07 00:28:01.087451 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-07 00:28:01.087462 | orchestrator | Wednesday 07 January 2026 00:27:57 +0000 (0:00:00.233) 0:00:21.153 ***** 2026-01-07 00:28:01.087474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:28:01.087487 | orchestrator | 2026-01-07 00:28:01.087498 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-07 00:28:01.087508 | orchestrator | Wednesday 07 January 2026 00:27:57 +0000 (0:00:00.295) 0:00:21.448 ***** 2026-01-07 00:28:01.087527 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087538 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087549 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087560 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087570 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.087581 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.087592 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.087602 | orchestrator | 2026-01-07 00:28:01.087613 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-07 00:28:01.087643 | orchestrator | Wednesday 07 January 2026 00:27:57 +0000 (0:00:00.550) 0:00:21.999 ***** 2026-01-07 00:28:01.087654 | orchestrator | 
skipping: [testbed-manager] 2026-01-07 00:28:01.087665 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:28:01.087676 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:28:01.087687 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:28:01.087697 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:28:01.087708 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:28:01.087719 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:28:01.087730 | orchestrator | 2026-01-07 00:28:01.087741 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-07 00:28:01.087752 | orchestrator | Wednesday 07 January 2026 00:27:58 +0000 (0:00:00.224) 0:00:22.224 ***** 2026-01-07 00:28:01.087775 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087787 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087798 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087808 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087819 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:01.087830 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:01.087841 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:01.087852 | orchestrator | 2026-01-07 00:28:01.087863 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-07 00:28:01.087875 | orchestrator | Wednesday 07 January 2026 00:27:59 +0000 (0:00:01.113) 0:00:23.337 ***** 2026-01-07 00:28:01.087886 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.087897 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.087908 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.087918 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.087929 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:28:01.087948 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:28:01.087959 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:28:01.087970 | orchestrator | 
2026-01-07 00:28:01.087981 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-07 00:28:01.087992 | orchestrator | Wednesday 07 January 2026 00:27:59 +0000 (0:00:00.593) 0:00:23.931 ***** 2026-01-07 00:28:01.088003 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:01.088014 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:01.088025 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:01.088036 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:01.088055 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:42.730456 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:42.730555 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:42.730565 | orchestrator | 2026-01-07 00:28:42.730573 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-07 00:28:42.730582 | orchestrator | Wednesday 07 January 2026 00:28:01 +0000 (0:00:01.211) 0:00:25.142 ***** 2026-01-07 00:28:42.730589 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:42.730597 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:28:42.730604 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:42.730611 | orchestrator | changed: [testbed-manager] 2026-01-07 00:28:42.730664 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:28:42.730672 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:28:42.730683 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:28:42.730695 | orchestrator | 2026-01-07 00:28:42.730707 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-07 00:28:42.730743 | orchestrator | Wednesday 07 January 2026 00:28:17 +0000 (0:00:16.309) 0:00:41.452 ***** 2026-01-07 00:28:42.730756 | orchestrator | ok: [testbed-manager] 2026-01-07 00:28:42.730766 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:28:42.730777 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:28:42.730789 | orchestrator 
ok: [testbed-node-5]
2026-01-07 00:28:42.730796 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.730803 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.730810 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.730816 | orchestrator |
2026-01-07 00:28:42.730823 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-07 00:28:42.730830 | orchestrator | Wednesday 07 January 2026 00:28:17 +0000 (0:00:00.236) 0:00:41.689 *****
2026-01-07 00:28:42.730836 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.730843 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.730850 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.730857 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.730863 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.730870 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.730877 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.730883 | orchestrator |
2026-01-07 00:28:42.730890 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-07 00:28:42.730897 | orchestrator | Wednesday 07 January 2026 00:28:17 +0000 (0:00:00.244) 0:00:41.934 *****
2026-01-07 00:28:42.730903 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.730910 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.730917 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.730923 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.730930 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.730937 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.730943 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.730950 | orchestrator |
2026-01-07 00:28:42.730957 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-07 00:28:42.730963 | orchestrator | Wednesday 07 January 2026 00:28:18 +0000 (0:00:00.236) 0:00:42.170 *****
2026-01-07 00:28:42.730973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:28:42.730981 | orchestrator |
2026-01-07 00:28:42.730988 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-07 00:28:42.730995 | orchestrator | Wednesday 07 January 2026 00:28:18 +0000 (0:00:00.279) 0:00:42.450 *****
2026-01-07 00:28:42.731001 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731008 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731014 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731021 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731027 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731034 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731041 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731047 | orchestrator |
2026-01-07 00:28:42.731054 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-07 00:28:42.731061 | orchestrator | Wednesday 07 January 2026 00:28:20 +0000 (0:00:01.847) 0:00:44.298 *****
2026-01-07 00:28:42.731067 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:42.731074 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:42.731081 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:42.731087 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:42.731094 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:42.731101 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:42.731107 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:42.731114 | orchestrator |
2026-01-07 00:28:42.731120 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-07 00:28:42.731127 | orchestrator | Wednesday 07 January 2026 00:28:21 +0000 (0:00:01.145) 0:00:45.444 *****
2026-01-07 00:28:42.731140 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731146 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731153 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731160 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731166 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731173 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731180 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731186 | orchestrator |
2026-01-07 00:28:42.731193 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-07 00:28:42.731199 | orchestrator | Wednesday 07 January 2026 00:28:22 +0000 (0:00:00.860) 0:00:46.304 *****
2026-01-07 00:28:42.731220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:28:42.731229 | orchestrator |
2026-01-07 00:28:42.731235 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-07 00:28:42.731243 | orchestrator | Wednesday 07 January 2026 00:28:22 +0000 (0:00:00.334) 0:00:46.639 *****
2026-01-07 00:28:42.731250 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:42.731257 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:42.731263 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:42.731270 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:42.731277 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:42.731283 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:42.731290 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:42.731296 | orchestrator |
2026-01-07 00:28:42.731318 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-07 00:28:42.731325 | orchestrator | Wednesday 07 January 2026 00:28:23 +0000 (0:00:01.045) 0:00:47.685 *****
2026-01-07 00:28:42.731332 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:28:42.731339 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:28:42.731345 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:28:42.731352 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:28:42.731358 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:28:42.731365 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:28:42.731372 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:28:42.731378 | orchestrator |
2026-01-07 00:28:42.731385 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-07 00:28:42.731392 | orchestrator | Wednesday 07 January 2026 00:28:23 +0000 (0:00:00.237) 0:00:47.922 *****
2026-01-07 00:28:42.731399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:28:42.731405 | orchestrator |
2026-01-07 00:28:42.731412 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-07 00:28:42.731419 | orchestrator | Wednesday 07 January 2026 00:28:24 +0000 (0:00:00.314) 0:00:48.236 *****
2026-01-07 00:28:42.731425 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731432 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731439 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731445 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731452 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731458 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731465 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731475 | orchestrator |
2026-01-07 00:28:42.731486 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-07 00:28:42.731497 | orchestrator | Wednesday 07 January 2026 00:28:25 +0000 (0:00:01.791) 0:00:50.028 *****
2026-01-07 00:28:42.731507 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:42.731517 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:42.731526 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:42.731537 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:42.731548 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:42.731568 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:42.731577 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:42.731584 | orchestrator |
2026-01-07 00:28:42.731590 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-07 00:28:42.731597 | orchestrator | Wednesday 07 January 2026 00:28:27 +0000 (0:00:01.207) 0:00:51.236 *****
2026-01-07 00:28:42.731604 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:28:42.731610 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:28:42.731649 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:28:42.731657 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:28:42.731663 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:28:42.731670 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:28:42.731676 | orchestrator | changed: [testbed-manager]
2026-01-07 00:28:42.731683 | orchestrator |
2026-01-07 00:28:42.731690 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-07 00:28:42.731696 | orchestrator | Wednesday 07 January 2026 00:28:39 +0000 (0:00:12.179) 0:01:03.415 *****
2026-01-07 00:28:42.731703 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731710 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731716 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731723 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731729 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731736 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731743 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731749 | orchestrator |
2026-01-07 00:28:42.731756 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-07 00:28:42.731762 | orchestrator | Wednesday 07 January 2026 00:28:40 +0000 (0:00:01.323) 0:01:04.739 *****
2026-01-07 00:28:42.731769 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731775 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731782 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731788 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731795 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731801 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731808 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731814 | orchestrator |
2026-01-07 00:28:42.731821 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-07 00:28:42.731828 | orchestrator | Wednesday 07 January 2026 00:28:41 +0000 (0:00:01.161) 0:01:05.901 *****
2026-01-07 00:28:42.731834 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731841 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731847 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731854 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731860 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731867 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731873 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731880 | orchestrator |
2026-01-07 00:28:42.731887 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-07 00:28:42.731893 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:00.267) 0:01:06.168 *****
2026-01-07 00:28:42.731900 | orchestrator | ok: [testbed-manager]
2026-01-07 00:28:42.731912 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:28:42.731919 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:28:42.731925 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:28:42.731932 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:28:42.731938 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:28:42.731945 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:28:42.731951 | orchestrator |
2026-01-07 00:28:42.731958 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-07 00:28:42.731965 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:00.314) 0:01:06.473 *****
2026-01-07 00:28:42.731972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:28:42.731984 | orchestrator |
2026-01-07 00:28:42.731997 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-07 00:31:05.408741 | orchestrator | Wednesday 07 January 2026 00:28:42 +0000 (0:00:00.314) 0:01:06.787 *****
2026-01-07 00:31:05.408849 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.408868 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.408880 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.408892 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.408903 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.408914 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.408924 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.408936 | orchestrator |
2026-01-07 00:31:05.408948 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-07 00:31:05.408959 | orchestrator | Wednesday 07 January 2026 00:28:44 +0000 (0:00:01.956) 0:01:08.744 *****
2026-01-07 00:31:05.408970 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.408982 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.408993 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.409004 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.409015 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.409025 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.409036 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.409047 | orchestrator |
2026-01-07 00:31:05.409058 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-07 00:31:05.409071 | orchestrator | Wednesday 07 January 2026 00:28:45 +0000 (0:00:00.686) 0:01:09.430 *****
2026-01-07 00:31:05.409082 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.409092 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409103 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409114 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409125 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409136 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409147 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409158 | orchestrator |
2026-01-07 00:31:05.409170 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-07 00:31:05.409181 | orchestrator | Wednesday 07 January 2026 00:28:45 +0000 (0:00:00.220) 0:01:09.650 *****
2026-01-07 00:31:05.409192 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.409202 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409214 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409226 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409239 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409253 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409266 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409279 | orchestrator |
2026-01-07 00:31:05.409292 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-07 00:31:05.409305 | orchestrator | Wednesday 07 January 2026 00:28:47 +0000 (0:00:01.507) 0:01:11.158 *****
2026-01-07 00:31:05.409318 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.409332 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.409345 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.409358 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.409371 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.409384 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.409402 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.409415 | orchestrator |
2026-01-07 00:31:05.409427 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-07 00:31:05.409438 | orchestrator | Wednesday 07 January 2026 00:28:49 +0000 (0:00:02.252) 0:01:13.411 *****
2026-01-07 00:31:05.409449 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.409460 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409471 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409482 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409493 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409504 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409535 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409546 | orchestrator |
2026-01-07 00:31:05.409557 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-07 00:31:05.409568 | orchestrator | Wednesday 07 January 2026 00:28:52 +0000 (0:00:02.720) 0:01:16.132 *****
2026-01-07 00:31:05.409579 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.409590 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409620 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409632 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409643 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409653 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409664 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409675 | orchestrator |
2026-01-07 00:31:05.409685 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-07 00:31:05.409697 | orchestrator | Wednesday 07 January 2026 00:29:31 +0000 (0:00:39.672) 0:01:55.804 *****
2026-01-07 00:31:05.409707 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.409718 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:05.409729 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:05.409740 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:05.409750 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:05.409761 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:05.409772 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:05.409782 | orchestrator |
2026-01-07 00:31:05.409793 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-07 00:31:05.409804 | orchestrator | Wednesday 07 January 2026 00:30:49 +0000 (0:01:17.897) 0:03:13.702 *****
2026-01-07 00:31:05.409815 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:05.409826 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409836 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409847 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409859 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409869 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409880 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409891 | orchestrator |
2026-01-07 00:31:05.409902 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-07 00:31:05.409913 | orchestrator | Wednesday 07 January 2026 00:30:51 +0000 (0:00:02.312) 0:03:16.014 *****
2026-01-07 00:31:05.409924 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:05.409934 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:05.409945 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:05.409956 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:05.409966 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:05.409977 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:05.409987 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:05.409998 | orchestrator |
2026-01-07 00:31:05.410009 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-07 00:31:05.410115 | orchestrator | Wednesday 07 January 2026 00:31:03 +0000 (0:00:11.226) 0:03:27.240 *****
2026-01-07 00:31:05.410160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-07 00:31:05.410194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-07 00:31:05.410219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-07 00:31:05.410232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-07 00:31:05.410243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-07 00:31:05.410254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-07 00:31:05.410265 | orchestrator |
2026-01-07 00:31:05.410276 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-07 00:31:05.410287 | orchestrator | Wednesday 07 January 2026 00:31:03 +0000 (0:00:00.402) 0:03:27.643 *****
2026-01-07 00:31:05.410298 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410309 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:05.410320 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410331 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410342 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:05.410353 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410363 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:05.410374 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:05.410385 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410406 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-07 00:31:05.410416 | orchestrator |
2026-01-07 00:31:05.410431 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-07 00:31:05.410443 | orchestrator | Wednesday 07 January 2026 00:31:05 +0000 (0:00:01.715) 0:03:29.358 *****
2026-01-07 00:31:05.410453 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:05.410465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:05.410476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:05.410486 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:05.410497 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:05.410515 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.234169 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.234420 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.234433 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.234444 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.234455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.234465 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.234476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.234487 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.234498 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.234509 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.234520 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.234531 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.234541 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.234554 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:13.234568 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234581 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.234629 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.234647 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.234660 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.234672 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.234685 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234697 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.234709 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.234722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.234734 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.234747 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.234759 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:13.234771 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.234784 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.234796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.234809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.234821 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.234835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.234857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.234889 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.234902 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:13.234916 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:13.234927 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-07 00:31:13.234959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.234970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.235002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-07 00:31:13.235014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.235025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.235036 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-07 00:31:13.235047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.235058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.235068 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-07 00:31:13.235079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.235090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.235101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-07 00:31:13.235111 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.235122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.235133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.235144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.235154 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.235165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.235176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-07 00:31:13.235186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.235197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.235208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.235219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.235230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-07 00:31:13.235241 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-07 00:31:13.235252 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-07 00:31:13.235270 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-07 00:31:13.235282 | orchestrator |
2026-01-07 00:31:13.235300 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-07 00:31:13.235318 | orchestrator | Wednesday 07 January 2026 00:31:11 +0000 (0:00:05.735) 0:03:35.094 *****
2026-01-07 00:31:13.235337 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235372 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235390 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-07 00:31:13.235462 | orchestrator |
2026-01-07 00:31:13.235481 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-07 00:31:13.235499 | orchestrator | Wednesday 07 January 2026 00:31:12 +0000 (0:00:01.658) 0:03:36.752 *****
2026-01-07 00:31:13.235514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235532 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:13.235544 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235555 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235565 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:13.235576 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:13.235587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235620 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:13.235632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235643 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:13.235663 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350199 | orchestrator |
2026-01-07 00:31:27.350339 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-07 00:31:27.350365 | orchestrator | Wednesday 07 January 2026 00:31:13 +0000 (0:00:00.532) 0:03:37.285 *****
2026-01-07 00:31:27.350383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350402 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350420 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:27.350439 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:27.350456 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350474 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:27.350491 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350508 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:27.350525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-07 00:31:27.350634 | orchestrator |
2026-01-07 00:31:27.350654 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-07 00:31:27.350671 | orchestrator | Wednesday 07 January 2026 00:31:13 +0000 (0:00:00.671) 0:03:37.956 *****
2026-01-07 00:31:27.350690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:27.350708 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:27.350720 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:27.350732 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-07 00:31:27.350743 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:27.350753
| orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:27.350763 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-07 00:31:27.350772 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:27.350782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-07 00:31:27.350792 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-07 00:31:27.350802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-07 00:31:27.350811 | orchestrator | 2026-01-07 00:31:27.350821 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-01-07 00:31:27.350831 | orchestrator | Wednesday 07 January 2026 00:31:14 +0000 (0:00:00.569) 0:03:38.526 ***** 2026-01-07 00:31:27.350843 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:31:27.350859 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:31:27.350875 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:31:27.350891 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:31:27.350907 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:31:27.350924 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:31:27.350940 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:31:27.350957 | orchestrator | 2026-01-07 00:31:27.350973 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-01-07 00:31:27.350990 | orchestrator | Wednesday 07 January 2026 00:31:14 +0000 (0:00:00.311) 0:03:38.838 ***** 2026-01-07 00:31:27.351008 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:31:27.351025 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:31:27.351042 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:31:27.351058 | orchestrator | ok: [testbed-manager] 
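The sysctl tasks above boil down to declaring a list of kernel parameters and letting the role report per item whether the live value already matched (`ok`) or had to be set (`changed`). A minimal sketch of that idempotency check, assuming invented helper names (`render_sysctl_conf`, `pending_changes` are illustrations, not osism.commons.sysctl code); the parameter names and values are copied from the log output:

```python
# Illustrative sketch only: mirrors what the "Set sysctl parameters on
# generic" task in the log appears to do. Parameter names/values come
# from the log; the helper functions are hypothetical.

GENERIC_PARAMS = [
    {"name": "net.ipv4.tcp_keepalive_probes", "value": 3},
    {"name": "net.core.wmem_max", "value": 16777216},
    {"name": "net.core.rmem_max", "value": 16777216},
    {"name": "net.ipv4.tcp_fin_timeout", "value": 20},
    {"name": "net.ipv4.tcp_tw_reuse", "value": 1},
    {"name": "net.core.somaxconn", "value": 4096},
    {"name": "net.ipv4.tcp_syncookies", "value": 0},
    {"name": "net.ipv4.tcp_max_syn_backlog", "value": 8192},
    {"name": "vm.swappiness", "value": 1},
]


def render_sysctl_conf(params):
    """Render the parameters as a /etc/sysctl.d style drop-in file."""
    return "".join(f"{p['name']} = {p['value']}\n" for p in params)


def pending_changes(params, current):
    """Return items whose desired value differs from the live value,
    i.e. the items Ansible would report as 'changed' rather than 'ok'."""
    return [p for p in params if current.get(p["name"]) != p["value"]]
```

On a real host the live values would be read from `/proc/sys`; here they are passed in explicitly so the diffing logic stays self-contained.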
2026-01-07 00:31:27.351072 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:27.351082 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:27.351091 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:27.351101 | orchestrator |
2026-01-07 00:31:27.351111 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-07 00:31:27.351120 | orchestrator | Wednesday 07 January 2026 00:31:20 +0000 (0:00:05.962) 0:03:44.800 *****
2026-01-07 00:31:27.351130 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-07 00:31:27.351140 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-07 00:31:27.351150 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:27.351160 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:27.351169 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-07 00:31:27.351179 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:27.351189 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-07 00:31:27.351199 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-07 00:31:27.351209 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:27.351219 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-07 00:31:27.351246 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:27.351256 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:27.351277 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-07 00:31:27.351287 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:27.351297 | orchestrator |
2026-01-07 00:31:27.351306 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-07 00:31:27.351316 | orchestrator | Wednesday 07 January 2026 00:31:21 +0000 (0:00:00.342) 0:03:45.143 *****
2026-01-07 00:31:27.351325 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-07 00:31:27.351335 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-07 00:31:27.351345 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-07 00:31:27.351377 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-07 00:31:27.351387 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-07 00:31:27.351397 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-07 00:31:27.351406 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-07 00:31:27.351416 | orchestrator |
2026-01-07 00:31:27.351425 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-07 00:31:27.351435 | orchestrator | Wednesday 07 January 2026 00:31:22 +0000 (0:00:01.185) 0:03:46.329 *****
2026-01-07 00:31:27.351447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:31:27.351459 | orchestrator |
2026-01-07 00:31:27.351469 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-07 00:31:27.351478 | orchestrator | Wednesday 07 January 2026 00:31:22 +0000 (0:00:00.587) 0:03:46.916 *****
2026-01-07 00:31:27.351488 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:27.351497 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:27.351507 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:27.351516 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:27.351526 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:27.351535 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:27.351545 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:27.351554 | orchestrator |
2026-01-07 00:31:27.351564 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-07 00:31:27.351631 | orchestrator | Wednesday 07 January 2026 00:31:24 +0000 (0:00:01.443) 0:03:48.360 *****
2026-01-07 00:31:27.351642 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:27.351652 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:27.351661 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:27.351670 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:27.351680 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:27.351689 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:27.351698 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:27.351708 | orchestrator |
2026-01-07 00:31:27.351717 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-07 00:31:27.351727 | orchestrator | Wednesday 07 January 2026 00:31:24 +0000 (0:00:00.640) 0:03:49.000 *****
2026-01-07 00:31:27.351737 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:27.351746 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:27.351756 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:27.351765 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:27.351775 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:27.351784 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:27.351793 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:27.351803 | orchestrator |
2026-01-07 00:31:27.351812 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-07 00:31:27.351822 | orchestrator | Wednesday 07 January 2026 00:31:25 +0000 (0:00:00.752) 0:03:49.753 *****
2026-01-07 00:31:27.351832 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:27.351841 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:27.351851 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:27.351860 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:27.351869 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:27.351879 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:27.351895 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:27.351905 | orchestrator |
2026-01-07 00:31:27.351915 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-07 00:31:27.351924 | orchestrator | Wednesday 07 January 2026 00:31:26 +0000 (0:00:00.649) 0:03:50.403 *****
2026-01-07 00:31:27.351937 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744310.9591427, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:27.351956 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744333.693765, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:27.351967 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744339.3720322, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:27.352006 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744329.64347, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.486805 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744320.8046706, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.486918 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744336.1361618, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.486935 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767744328.8582683, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.486974 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.486986 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487016 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487038 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487082 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487103 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487122 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 00:31:32.487157 | orchestrator |
2026-01-07 00:31:32.487180 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-07 00:31:32.487202 | orchestrator | Wednesday 07 January 2026 00:31:27 +0000 (0:00:01.001) 0:03:51.404 *****
2026-01-07 00:31:32.487222 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:32.487242 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:32.487263 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:32.487283 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:32.487303 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:32.487322 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:32.487342 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:32.487362 | orchestrator |
2026-01-07 00:31:32.487382 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-07 00:31:32.487404 | orchestrator | Wednesday 07 January 2026 00:31:28 +0000 (0:00:01.199) 0:03:52.604 *****
2026-01-07 00:31:32.487424 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:32.487444 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:32.487464 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:32.487484 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:32.487504 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:32.487522 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:32.487541 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:32.487558 | orchestrator |
2026-01-07 00:31:32.487615 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-07 00:31:32.487634 | orchestrator | Wednesday 07 January 2026 00:31:29 +0000 (0:00:01.239) 0:03:53.843 *****
2026-01-07 00:31:32.487652 | orchestrator | changed: [testbed-manager]
2026-01-07 00:31:32.487672 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:31:32.487691 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:31:32.487709 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:31:32.487727 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:31:32.487745 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:31:32.487763 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:31:32.487779 | orchestrator |
2026-01-07 00:31:32.487806 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-07 00:31:32.487824 | orchestrator | Wednesday 07 January 2026 00:31:31 +0000 (0:00:00.306) 0:03:55.072 *****
2026-01-07 00:31:32.487841 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:31:32.487860 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:31:32.487878 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:31:32.487896 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:31:32.487914 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:31:32.487931 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:31:32.487948 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:31:32.487967 | orchestrator |
2026-01-07 00:31:32.487985 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-07 00:31:32.488003 | orchestrator | Wednesday 07 January 2026 00:31:31 +0000 (0:00:00.306) 0:03:55.379 *****
2026-01-07 00:31:32.488021 | orchestrator | ok: [testbed-manager]
2026-01-07 00:31:32.488040 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:31:32.488057 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:31:32.488073 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:31:32.488091 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:31:32.488108 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:31:32.488125 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:31:32.488143 | orchestrator |
2026-01-07 00:31:32.488162 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-07 00:31:32.488181 | orchestrator | Wednesday 07 January 2026 00:31:32 +0000 (0:00:00.760) 0:03:56.140 *****
2026-01-07 00:31:32.488216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:31:32.488237 | orchestrator |
2026-01-07 00:31:32.488256 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-07 00:31:32.488291 | orchestrator | Wednesday 07 January 2026 00:31:32 +0000 (0:00:00.402) 0:03:56.542 *****
2026-01-07 00:32:56.434792 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.434870 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:56.434878 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:56.434882 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:56.434886 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:56.434890 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:56.434894 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:56.434899 | orchestrator |
2026-01-07 00:32:56.434904 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-07 00:32:56.434909 | orchestrator | Wednesday 07 January 2026 00:31:42 +0000 (0:00:09.642) 0:04:06.185 *****
2026-01-07 00:32:56.434913 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.434917 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.434921 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.434925 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.434929 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.434933 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.434936 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.434940 | orchestrator |
2026-01-07 00:32:56.434944 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-07 00:32:56.434948 | orchestrator | Wednesday 07 January 2026 00:31:43 +0000 (0:00:01.786) 0:04:07.971 *****
2026-01-07 00:32:56.434951 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.434955 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.434959 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.434963 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.434966 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.434970 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.434974 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.434977 | orchestrator |
2026-01-07 00:32:56.434981 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-07 00:32:56.434985 | orchestrator | Wednesday 07 January 2026 00:31:45 +0000 (0:00:01.259) 0:04:09.231 *****
2026-01-07 00:32:56.434989 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.434993 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.434997 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.435001 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.435005 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.435008 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.435012 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.435016 | orchestrator |
2026-01-07 00:32:56.435020 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-07 00:32:56.435024 | orchestrator | Wednesday 07 January 2026 00:31:45 +0000 (0:00:00.306) 0:04:09.538 *****
2026-01-07 00:32:56.435028 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.435032 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.435037 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.435043 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.435049 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.435057 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.435063 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.435069 | orchestrator |
2026-01-07 00:32:56.435075 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-07 00:32:56.435081 | orchestrator | Wednesday 07 January 2026 00:31:45 +0000 (0:00:00.311) 0:04:09.849 *****
2026-01-07 00:32:56.435088 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.435114 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.435120 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.435126 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.435132 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.435138 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.435144 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.435150 | orchestrator |
2026-01-07 00:32:56.435156 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-07 00:32:56.435162 | orchestrator | Wednesday 07 January 2026 00:31:46 +0000 (0:00:00.292) 0:04:10.142 *****
2026-01-07 00:32:56.435167 | orchestrator | ok: [testbed-manager]
2026-01-07 00:32:56.435173 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:32:56.435179 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:32:56.435185 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:32:56.435190 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:32:56.435196 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:32:56.435202 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:32:56.435208 | orchestrator |
2026-01-07 00:32:56.435215 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-07 00:32:56.435221 | orchestrator | Wednesday 07 January 2026 00:31:51 +0000 (0:00:05.388) 0:04:15.531 *****
2026-01-07 00:32:56.435231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:56.435239 | orchestrator |
2026-01-07 00:32:56.435245 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-07 00:32:56.435252 | orchestrator | Wednesday 07 January 2026 00:31:51 +0000 (0:00:00.434) 0:04:15.965 *****
2026-01-07 00:32:56.435258 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435265 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-07 00:32:56.435272 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435278 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:32:56.435284 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-07 00:32:56.435290 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435314 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-07 00:32:56.435321 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:32:56.435328 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
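A pattern that recurs throughout the run above is group-scoped targeting: for example, net.netfilter.nf_conntrack_max was changed on testbed-node-3/4/5 by the compute task and on testbed-node-0/1/2 by the network task, and skipped everywhere else. A minimal sketch of that outcome logic; the group layout below is inferred from the changed/skipping lines of this run and is an assumption, not a dump of the real inventory:

```python
# Hypothetical inventory groups, inferred from which hosts reported
# "changed" vs "skipping" for each group-scoped task in the log.
GROUPS = {
    "compute": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
    "network": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    "k3s_node": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
}


def task_result(host, group):
    """Per-host outcome of a task conditioned on group membership."""
    return "changed" if host in GROUPS.get(group, set()) else "skipping"
```

This also explains why testbed-manager skips every one of these tasks: it is in none of the targeted groups in this run.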
2026-01-07 00:32:56.435336 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-07 00:32:56.435342 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:32:56.435348 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435354 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-07 00:32:56.435361 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:32:56.435368 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435375 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-07 00:32:56.435399 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:32:56.435407 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:32:56.435414 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-07 00:32:56.435422 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-07 00:32:56.435429 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:32:56.435435 | orchestrator |
2026-01-07 00:32:56.435443 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-07 00:32:56.435450 | orchestrator | Wednesday 07 January 2026 00:31:52 +0000 (0:00:00.380) 0:04:16.346 *****
2026-01-07 00:32:56.435496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:56.435510 | orchestrator |
2026-01-07 00:32:56.435517 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-07 00:32:56.435523 | orchestrator | Wednesday 07 January 2026 00:31:52 +0000 (0:00:00.426) 0:04:16.772 *****
2026-01-07 00:32:56.435528 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-07 00:32:56.435535 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-07 00:32:56.435542 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:32:56.435549 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-07 00:32:56.435556 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:32:56.435562 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:32:56.435569 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-07 00:32:56.435576 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:32:56.435583 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-07 00:32:56.435590 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-07 00:32:56.435597 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:32:56.435604 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:32:56.435611 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-07 00:32:56.435618 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:32:56.435625 | orchestrator |
2026-01-07 00:32:56.435632 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-07 00:32:56.435639 | orchestrator | Wednesday 07 January 2026 00:31:53 +0000 (0:00:00.354) 0:04:17.127 *****
2026-01-07 00:32:56.435647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:32:56.435654 | orchestrator |
2026-01-07 00:32:56.435661 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-07 00:32:56.435668 | orchestrator | Wednesday 07 January 2026 00:31:53 +0000 (0:00:00.445) 0:04:17.572 *****
2026-01-07 00:32:56.435675 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:56.435681 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:56.435688 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:56.435695 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:56.435702 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:56.435709 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:56.435716 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:56.435723 | orchestrator |
2026-01-07 00:32:56.435730 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-07 00:32:56.435737 | orchestrator | Wednesday 07 January 2026 00:32:27 +0000 (0:00:34.482) 0:04:52.054 *****
2026-01-07 00:32:56.435744 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:56.435751 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:56.435758 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:56.435765 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:56.435772 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:56.435783 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:32:56.435790 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:56.435797 | orchestrator |
2026-01-07 00:32:56.435804 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-07 00:32:56.435811 | orchestrator | Wednesday 07 January 2026 00:32:37 +0000 (0:00:09.463) 0:05:01.517 *****
2026-01-07 00:32:56.435818 | orchestrator | changed: [testbed-manager]
2026-01-07 00:32:56.435825 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:32:56.435832 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:32:56.435838 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:32:56.435845 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:32:56.435852 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:32:56.435859 | orchestrator | changed:
[testbed-node-2] 2026-01-07 00:32:56.435872 | orchestrator | 2026-01-07 00:32:56.435880 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-01-07 00:32:56.435886 | orchestrator | Wednesday 07 January 2026 00:32:47 +0000 (0:00:09.898) 0:05:11.415 ***** 2026-01-07 00:32:56.435893 | orchestrator | ok: [testbed-manager] 2026-01-07 00:32:56.435900 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:32:56.435907 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:32:56.435914 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:32:56.435921 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:32:56.435928 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:32:56.435935 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:32:56.435942 | orchestrator | 2026-01-07 00:32:56.435949 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-01-07 00:32:56.435956 | orchestrator | Wednesday 07 January 2026 00:32:49 +0000 (0:00:01.963) 0:05:13.379 ***** 2026-01-07 00:32:56.435963 | orchestrator | changed: [testbed-manager] 2026-01-07 00:32:56.435970 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:32:56.435977 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:32:56.435984 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:32:56.435991 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:32:56.435998 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:32:56.436005 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:32:56.436012 | orchestrator | 2026-01-07 00:32:56.436025 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-01-07 00:33:08.224334 | orchestrator | Wednesday 07 January 2026 00:32:56 +0000 (0:00:07.107) 0:05:20.487 ***** 2026-01-07 00:33:08.224438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:33:08.224503 | orchestrator | 2026-01-07 00:33:08.224515 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-01-07 00:33:08.224524 | orchestrator | Wednesday 07 January 2026 00:32:56 +0000 (0:00:00.571) 0:05:21.058 ***** 2026-01-07 00:33:08.224534 | orchestrator | changed: [testbed-manager] 2026-01-07 00:33:08.224545 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:33:08.224553 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:33:08.224562 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:33:08.224571 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:33:08.224580 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:33:08.224589 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:33:08.224598 | orchestrator | 2026-01-07 00:33:08.224607 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-01-07 00:33:08.224616 | orchestrator | Wednesday 07 January 2026 00:32:57 +0000 (0:00:00.836) 0:05:21.895 ***** 2026-01-07 00:33:08.224625 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:08.224634 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:08.224643 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:08.224651 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:08.224660 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:08.224669 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:08.224678 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:08.224686 | orchestrator | 2026-01-07 00:33:08.224695 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-01-07 00:33:08.224704 | orchestrator | Wednesday 07 January 2026 00:32:59 +0000 (0:00:01.853) 0:05:23.749 ***** 2026-01-07 00:33:08.224713 | orchestrator | changed: [testbed-manager] 2026-01-07 00:33:08.224721 | 
orchestrator | changed: [testbed-node-5] 2026-01-07 00:33:08.224730 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:33:08.224740 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:33:08.224749 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:33:08.224757 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:33:08.224766 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:33:08.224774 | orchestrator | 2026-01-07 00:33:08.224783 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-07 00:33:08.224812 | orchestrator | Wednesday 07 January 2026 00:33:00 +0000 (0:00:00.854) 0:05:24.603 ***** 2026-01-07 00:33:08.224822 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.224830 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:08.224839 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.224848 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:08.224856 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:08.224865 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:08.224875 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:08.224886 | orchestrator | 2026-01-07 00:33:08.224896 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-07 00:33:08.224906 | orchestrator | Wednesday 07 January 2026 00:33:00 +0000 (0:00:00.294) 0:05:24.898 ***** 2026-01-07 00:33:08.224917 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.224927 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:08.224938 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.224948 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:08.224959 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:08.224969 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:08.224980 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:08.224990 | 
orchestrator | 2026-01-07 00:33:08.225000 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-07 00:33:08.225008 | orchestrator | Wednesday 07 January 2026 00:33:01 +0000 (0:00:00.379) 0:05:25.278 ***** 2026-01-07 00:33:08.225017 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:08.225026 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:08.225034 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:08.225043 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:08.225064 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:08.225073 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:08.225082 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:08.225090 | orchestrator | 2026-01-07 00:33:08.225099 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-07 00:33:08.225107 | orchestrator | Wednesday 07 January 2026 00:33:01 +0000 (0:00:00.284) 0:05:25.563 ***** 2026-01-07 00:33:08.225116 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.225124 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:08.225133 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.225142 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:08.225150 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:08.225158 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:08.225167 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:08.225175 | orchestrator | 2026-01-07 00:33:08.225184 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-07 00:33:08.225193 | orchestrator | Wednesday 07 January 2026 00:33:01 +0000 (0:00:00.294) 0:05:25.858 ***** 2026-01-07 00:33:08.225202 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:08.225210 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:08.225219 | orchestrator | ok: [testbed-node-4] 2026-01-07 
00:33:08.225227 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:08.225236 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:08.225245 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:08.225253 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:08.225262 | orchestrator | 2026-01-07 00:33:08.225271 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-07 00:33:08.225279 | orchestrator | Wednesday 07 January 2026 00:33:02 +0000 (0:00:00.330) 0:05:26.188 ***** 2026-01-07 00:33:08.225288 | orchestrator | ok: [testbed-manager] =>  2026-01-07 00:33:08.225296 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225305 | orchestrator | ok: [testbed-node-3] =>  2026-01-07 00:33:08.225314 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225322 | orchestrator | ok: [testbed-node-4] =>  2026-01-07 00:33:08.225331 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225346 | orchestrator | ok: [testbed-node-5] =>  2026-01-07 00:33:08.225355 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225381 | orchestrator | ok: [testbed-node-0] =>  2026-01-07 00:33:08.225391 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225399 | orchestrator | ok: [testbed-node-1] =>  2026-01-07 00:33:08.225408 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225416 | orchestrator | ok: [testbed-node-2] =>  2026-01-07 00:33:08.225424 | orchestrator |  docker_version: 5:27.5.1 2026-01-07 00:33:08.225433 | orchestrator | 2026-01-07 00:33:08.225468 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-07 00:33:08.225478 | orchestrator | Wednesday 07 January 2026 00:33:02 +0000 (0:00:00.312) 0:05:26.501 ***** 2026-01-07 00:33:08.225487 | orchestrator | ok: [testbed-manager] =>  2026-01-07 00:33:08.225495 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225504 | orchestrator | ok: 
[testbed-node-3] =>  2026-01-07 00:33:08.225512 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225521 | orchestrator | ok: [testbed-node-4] =>  2026-01-07 00:33:08.225529 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225538 | orchestrator | ok: [testbed-node-5] =>  2026-01-07 00:33:08.225546 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225554 | orchestrator | ok: [testbed-node-0] =>  2026-01-07 00:33:08.225563 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225571 | orchestrator | ok: [testbed-node-1] =>  2026-01-07 00:33:08.225580 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225588 | orchestrator | ok: [testbed-node-2] =>  2026-01-07 00:33:08.225597 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-07 00:33:08.225605 | orchestrator | 2026-01-07 00:33:08.225614 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-07 00:33:08.225623 | orchestrator | Wednesday 07 January 2026 00:33:02 +0000 (0:00:00.321) 0:05:26.822 ***** 2026-01-07 00:33:08.225631 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.225640 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:08.225648 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.225657 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:08.225665 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:08.225674 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:08.225682 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:08.225690 | orchestrator | 2026-01-07 00:33:08.225699 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-07 00:33:08.225708 | orchestrator | Wednesday 07 January 2026 00:33:03 +0000 (0:00:00.288) 0:05:27.111 ***** 2026-01-07 00:33:08.225716 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.225725 | orchestrator | 
skipping: [testbed-node-3] 2026-01-07 00:33:08.225733 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.225742 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:33:08.225750 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:33:08.225759 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:33:08.225767 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:33:08.225775 | orchestrator | 2026-01-07 00:33:08.225784 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-07 00:33:08.225793 | orchestrator | Wednesday 07 January 2026 00:33:03 +0000 (0:00:00.307) 0:05:27.418 ***** 2026-01-07 00:33:08.225804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:33:08.225814 | orchestrator | 2026-01-07 00:33:08.225823 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-07 00:33:08.225832 | orchestrator | Wednesday 07 January 2026 00:33:03 +0000 (0:00:00.441) 0:05:27.860 ***** 2026-01-07 00:33:08.225840 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:08.225849 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:08.225864 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:08.225872 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:08.225881 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:08.225889 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:08.225898 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:08.225906 | orchestrator | 2026-01-07 00:33:08.225915 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-07 00:33:08.225928 | orchestrator | Wednesday 07 January 2026 00:33:04 +0000 (0:00:01.049) 0:05:28.910 ***** 2026-01-07 
00:33:08.225937 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:33:08.225945 | orchestrator | ok: [testbed-manager] 2026-01-07 00:33:08.225954 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:33:08.225962 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:33:08.225971 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:33:08.225979 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:33:08.225987 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:33:08.225996 | orchestrator | 2026-01-07 00:33:08.226005 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-07 00:33:08.226014 | orchestrator | Wednesday 07 January 2026 00:33:07 +0000 (0:00:02.965) 0:05:31.875 ***** 2026-01-07 00:33:08.226078 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-07 00:33:08.226088 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-07 00:33:08.226097 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-07 00:33:08.226105 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:33:08.226114 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-07 00:33:08.226123 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-07 00:33:08.226131 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-07 00:33:08.226140 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-07 00:33:08.226149 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-07 00:33:08.226158 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-07 00:33:08.226166 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:33:08.226175 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-07 00:33:08.226183 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-07 00:33:08.226192 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-01-07 00:33:08.226201 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:33:08.226209 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-07 00:33:08.226225 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-07 00:34:14.035048 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-07 00:34:14.036054 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:14.036100 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-07 00:34:14.036113 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-07 00:34:14.036125 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-07 00:34:14.036136 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:14.036147 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:14.036159 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-07 00:34:14.036170 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-07 00:34:14.036181 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-07 00:34:14.036192 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:14.036204 | orchestrator | 2026-01-07 00:34:14.036216 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-07 00:34:14.036229 | orchestrator | Wednesday 07 January 2026 00:33:08 +0000 (0:00:00.617) 0:05:32.493 ***** 2026-01-07 00:34:14.036240 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.036252 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036263 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036275 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036311 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036349 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036360 | orchestrator | changed: [testbed-node-2] 
2026-01-07 00:34:14.036371 | orchestrator | 2026-01-07 00:34:14.036383 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-07 00:34:14.036394 | orchestrator | Wednesday 07 January 2026 00:33:16 +0000 (0:00:07.804) 0:05:40.298 ***** 2026-01-07 00:34:14.036405 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.036416 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036427 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036438 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036448 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036460 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036470 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.036481 | orchestrator | 2026-01-07 00:34:14.036492 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-07 00:34:14.036503 | orchestrator | Wednesday 07 January 2026 00:33:17 +0000 (0:00:01.163) 0:05:41.461 ***** 2026-01-07 00:34:14.036514 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.036525 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036536 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036547 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036558 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036569 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036580 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.036591 | orchestrator | 2026-01-07 00:34:14.036602 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-07 00:34:14.036613 | orchestrator | Wednesday 07 January 2026 00:33:26 +0000 (0:00:09.186) 0:05:50.647 ***** 2026-01-07 00:34:14.036624 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:14.036635 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036646 | 
orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036657 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036668 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036678 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036689 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.036700 | orchestrator | 2026-01-07 00:34:14.036711 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-07 00:34:14.036722 | orchestrator | Wednesday 07 January 2026 00:33:30 +0000 (0:00:03.477) 0:05:54.125 ***** 2026-01-07 00:34:14.036733 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.036744 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036755 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036766 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036777 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036787 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036798 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.036809 | orchestrator | 2026-01-07 00:34:14.036820 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-07 00:34:14.036832 | orchestrator | Wednesday 07 January 2026 00:33:31 +0000 (0:00:01.389) 0:05:55.515 ***** 2026-01-07 00:34:14.036843 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.036853 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.036864 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.036875 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.036886 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.036897 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.036908 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.036919 | orchestrator | 2026-01-07 00:34:14.036930 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-01-07 00:34:14.036941 | orchestrator | Wednesday 07 January 2026 00:33:33 +0000 (0:00:01.593) 0:05:57.109 ***** 2026-01-07 00:34:14.036952 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:34:14.036963 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:34:14.036982 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:34:14.036993 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:34:14.037004 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:34:14.037015 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:34:14.037026 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:14.037037 | orchestrator | 2026-01-07 00:34:14.037048 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-07 00:34:14.037059 | orchestrator | Wednesday 07 January 2026 00:33:33 +0000 (0:00:00.615) 0:05:57.725 ***** 2026-01-07 00:34:14.037070 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.037081 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.037092 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.037103 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.037114 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.037125 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.037136 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.037147 | orchestrator | 2026-01-07 00:34:14.037158 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-07 00:34:14.037239 | orchestrator | Wednesday 07 January 2026 00:33:43 +0000 (0:00:10.219) 0:06:07.945 ***** 2026-01-07 00:34:14.037255 | orchestrator | changed: [testbed-manager] 2026-01-07 00:34:14.037266 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.037278 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.037289 | orchestrator | changed: [testbed-node-5] 
2026-01-07 00:34:14.037301 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.037312 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.037340 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.037352 | orchestrator | 2026-01-07 00:34:14.037364 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-07 00:34:14.037376 | orchestrator | Wednesday 07 January 2026 00:33:44 +0000 (0:00:01.097) 0:06:09.043 ***** 2026-01-07 00:34:14.037387 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.037399 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.037410 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.037421 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.037433 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.037444 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.037456 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.037467 | orchestrator | 2026-01-07 00:34:14.037479 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-07 00:34:14.037491 | orchestrator | Wednesday 07 January 2026 00:33:54 +0000 (0:00:09.908) 0:06:18.951 ***** 2026-01-07 00:34:14.037502 | orchestrator | ok: [testbed-manager] 2026-01-07 00:34:14.037514 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:34:14.037559 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:34:14.037571 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:34:14.037582 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:34:14.037594 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:34:14.037606 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:34:14.037617 | orchestrator | 2026-01-07 00:34:14.037629 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-07 00:34:14.037641 | orchestrator | Wednesday 07 January 2026 
00:34:06 +0000 (0:00:12.004) 0:06:30.956 *****
2026-01-07 00:34:14.037653 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-07 00:34:14.037665 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-07 00:34:14.037677 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-07 00:34:14.037688 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-07 00:34:14.037700 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-07 00:34:14.037711 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-07 00:34:14.037723 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-07 00:34:14.037735 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-07 00:34:14.037754 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-07 00:34:14.037766 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-07 00:34:14.037778 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-07 00:34:14.037790 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-07 00:34:14.037859 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-07 00:34:14.037872 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-07 00:34:14.037884 | orchestrator |
2026-01-07 00:34:14.037896 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-07 00:34:14.037907 | orchestrator | Wednesday 07 January 2026 00:34:08 +0000 (0:00:01.273) 0:06:32.229 *****
2026-01-07 00:34:14.037919 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:14.037930 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:14.037941 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:14.037953 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:14.037964 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:14.037975 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:14.037987 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:14.037998 | orchestrator |
2026-01-07 00:34:14.038010 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-07 00:34:14.038079 | orchestrator | Wednesday 07 January 2026 00:34:08 +0000 (0:00:00.549) 0:06:32.779 *****
2026-01-07 00:34:14.038093 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:14.038109 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:14.038121 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:14.038133 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:14.038144 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:14.038155 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:14.038166 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:14.038178 | orchestrator |
2026-01-07 00:34:14.038190 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-07 00:34:14.038203 | orchestrator | Wednesday 07 January 2026 00:34:13 +0000 (0:00:04.316) 0:06:37.095 *****
2026-01-07 00:34:14.038214 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:14.038226 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:14.038237 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:14.038248 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:14.038260 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:14.038271 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:14.038282 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:14.038294 | orchestrator |
2026-01-07 00:34:14.038306 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-07 00:34:14.038337 | orchestrator | Wednesday 07 January 2026 00:34:13 +0000 (0:00:00.514) 0:06:37.610 *****
2026-01-07 00:34:14.038349 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-07 00:34:14.038361 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-07 00:34:14.038373 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:14.038385 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-07 00:34:14.038397 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-07 00:34:14.038409 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:14.038459 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-07 00:34:14.038471 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-07 00:34:14.038482 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:14.038504 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-07 00:34:34.384764 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-07 00:34:34.384877 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:34.384893 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-07 00:34:34.384929 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-07 00:34:34.384941 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:34.384953 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-07 00:34:34.384963 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-07 00:34:34.384974 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:34.384984 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-07 00:34:34.384995 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-07 00:34:34.385006 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:34.385017 | orchestrator |
2026-01-07 00:34:34.385029 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-07 00:34:34.385042 | orchestrator | Wednesday 07 January 2026 00:34:14 +0000 (0:00:00.748) 0:06:38.358 *****
2026-01-07 00:34:34.385053 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:34.385063 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:34.385074 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:34.385085 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:34.385096 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:34.385107 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:34.385117 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:34.385128 | orchestrator |
2026-01-07 00:34:34.385139 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-07 00:34:34.385150 | orchestrator | Wednesday 07 January 2026 00:34:14 +0000 (0:00:00.549) 0:06:38.908 *****
2026-01-07 00:34:34.385161 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:34.385172 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:34.385182 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:34.385193 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:34.385204 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:34.385214 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:34.385225 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:34.385236 | orchestrator |
2026-01-07 00:34:34.385246 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-07 00:34:34.385257 | orchestrator | Wednesday 07 January 2026 00:34:15 +0000 (0:00:00.557) 0:06:39.465 *****
2026-01-07 00:34:34.385268 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:34.385309 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:34:34.385322 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:34:34.385334 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:34:34.385346 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:34:34.385359 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:34:34.385372 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:34:34.385386 | orchestrator |
2026-01-07 00:34:34.385398 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-07 00:34:34.385411 | orchestrator | Wednesday 07 January 2026 00:34:15 +0000 (0:00:00.512) 0:06:39.977 *****
2026-01-07 00:34:34.385424 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.385436 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.385448 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.385461 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.385474 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.385486 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.385498 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.385509 | orchestrator |
2026-01-07 00:34:34.385520 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-07 00:34:34.385531 | orchestrator | Wednesday 07 January 2026 00:34:17 +0000 (0:00:02.074) 0:06:42.052 *****
2026-01-07 00:34:34.385544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:34.385565 | orchestrator |
2026-01-07 00:34:34.385604 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-07 00:34:34.385623 | orchestrator | Wednesday 07 January 2026 00:34:18 +0000 (0:00:00.913) 0:06:42.966 *****
2026-01-07 00:34:34.385643 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.385663 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:34.385677 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:34.385687 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:34.385698 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:34.385709 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:34.385719 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:34.385730 | orchestrator |
2026-01-07 00:34:34.385741 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-07 00:34:34.385751 | orchestrator | Wednesday 07 January 2026 00:34:19 +0000 (0:00:00.886) 0:06:43.852 *****
2026-01-07 00:34:34.385762 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.385773 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:34.385783 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:34.385794 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:34.385805 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:34.385815 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:34.385826 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:34.385836 | orchestrator |
2026-01-07 00:34:34.385847 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-07 00:34:34.385858 | orchestrator | Wednesday 07 January 2026 00:34:20 +0000 (0:00:00.874) 0:06:44.727 *****
2026-01-07 00:34:34.385869 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.385879 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:34.385890 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:34.385901 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:34.385911 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:34.385922 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:34.385932 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:34.385943 | orchestrator |
2026-01-07 00:34:34.385954 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-07 00:34:34.385984 | orchestrator | Wednesday 07 January 2026 00:34:22 +0000 (0:00:01.675) 0:06:46.402 *****
2026-01-07 00:34:34.385995 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:34:34.386006 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.386084 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.386096 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.386107 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.386118 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.386129 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.386139 | orchestrator |
2026-01-07 00:34:34.386151 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-07 00:34:34.386161 | orchestrator | Wednesday 07 January 2026 00:34:23 +0000 (0:00:01.547) 0:06:47.949 *****
2026-01-07 00:34:34.386172 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.386183 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:34.386194 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:34.386205 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:34.386216 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:34.386226 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:34.386237 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:34.386248 | orchestrator |
2026-01-07 00:34:34.386259 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-07 00:34:34.386315 | orchestrator | Wednesday 07 January 2026 00:34:25 +0000 (0:00:01.429) 0:06:49.379 *****
2026-01-07 00:34:34.386337 | orchestrator | changed: [testbed-manager]
2026-01-07 00:34:34.386356 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:34:34.386374 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:34:34.386393 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:34:34.386405 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:34:34.386415 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:34:34.386434 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:34:34.386445 | orchestrator |
2026-01-07 00:34:34.386456 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-07 00:34:34.386467 | orchestrator | Wednesday 07 January 2026 00:34:26 +0000 (0:00:01.452) 0:06:50.832 *****
2026-01-07 00:34:34.386478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:34.386490 | orchestrator |
2026-01-07 00:34:34.386501 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-07 00:34:34.386512 | orchestrator | Wednesday 07 January 2026 00:34:27 +0000 (0:00:01.022) 0:06:51.854 *****
2026-01-07 00:34:34.386523 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.386534 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.386544 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.386555 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.386566 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.386577 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.386587 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.386598 | orchestrator |
2026-01-07 00:34:34.386609 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-07 00:34:34.386625 | orchestrator | Wednesday 07 January 2026 00:34:29 +0000 (0:00:01.419) 0:06:53.273 *****
2026-01-07 00:34:34.386643 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.386661 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.386681 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.386699 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.386717 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.386736 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.386755 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.386773 | orchestrator |
2026-01-07 00:34:34.386792 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-07 00:34:34.386804 | orchestrator | Wednesday 07 January 2026 00:34:30 +0000 (0:00:01.157) 0:06:54.431 *****
2026-01-07 00:34:34.386815 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.386825 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.386836 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.386847 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.386857 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.386868 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.386878 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.386889 | orchestrator |
2026-01-07 00:34:34.386926 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-07 00:34:34.386937 | orchestrator | Wednesday 07 January 2026 00:34:31 +0000 (0:00:01.318) 0:06:55.749 *****
2026-01-07 00:34:34.386948 | orchestrator | ok: [testbed-manager]
2026-01-07 00:34:34.386959 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:34:34.386969 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:34:34.386980 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:34:34.386990 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:34:34.387001 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:34:34.387011 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:34:34.387022 | orchestrator |
2026-01-07 00:34:34.387033 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-07 00:34:34.387044 | orchestrator | Wednesday 07 January 2026 00:34:33 +0000 (0:00:01.471) 0:06:57.220 *****
2026-01-07 00:34:34.387054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:34:34.387065 | orchestrator |
2026-01-07 00:34:34.387076 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:34.387087 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.911) 0:06:58.132 *****
2026-01-07 00:34:34.387106 | orchestrator |
2026-01-07 00:34:34.387117 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:34.387128 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.041) 0:06:58.173 *****
2026-01-07 00:34:34.387138 | orchestrator |
2026-01-07 00:34:34.387149 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:34.387160 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.038) 0:06:58.212 *****
2026-01-07 00:34:34.387171 | orchestrator |
2026-01-07 00:34:34.387182 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:34:34.387203 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.047) 0:06:58.259 *****
2026-01-07 00:35:01.823753 | orchestrator |
2026-01-07 00:35:01.823867 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:01.823887 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.040) 0:06:58.300 *****
2026-01-07 00:35:01.823899 | orchestrator |
2026-01-07 00:35:01.823911 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:01.823922 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.041) 0:06:58.341 *****
2026-01-07 00:35:01.823933 | orchestrator |
2026-01-07 00:35:01.823944 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-07 00:35:01.823956 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.049) 0:06:58.390 *****
2026-01-07 00:35:01.823967 | orchestrator |
2026-01-07 00:35:01.823978 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-07 00:35:01.823989 | orchestrator | Wednesday 07 January 2026 00:34:34 +0000 (0:00:00.040) 0:06:58.431 *****
2026-01-07 00:35:01.824000 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:01.824012 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:01.824023 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:01.824034 | orchestrator |
2026-01-07 00:35:01.824046 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-07 00:35:01.824057 | orchestrator | Wednesday 07 January 2026 00:34:35 +0000 (0:00:01.612) 0:07:00.044 *****
2026-01-07 00:35:01.824068 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:01.824080 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:01.824091 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:01.824102 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:01.824113 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:01.824124 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:01.824135 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:01.824145 | orchestrator |
2026-01-07 00:35:01.824156 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-07 00:35:01.824168 | orchestrator | Wednesday 07 January 2026 00:34:37 +0000 (0:00:01.336) 0:07:01.380 *****
2026-01-07 00:35:01.824179 | orchestrator | changed: [testbed-manager]
2026-01-07 00:35:01.824190 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:01.824201 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:01.824238 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:01.824250 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:01.824260 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:01.824271 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:01.824282 | orchestrator |
2026-01-07 00:35:01.824294 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-07 00:35:01.824305 | orchestrator | Wednesday 07 January 2026 00:34:38 +0000 (0:00:01.463) 0:07:02.843 *****
2026-01-07 00:35:01.824316 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:01.824326 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:01.824337 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:01.824348 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:01.824359 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:01.824370 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:01.824381 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:01.824415 | orchestrator |
2026-01-07 00:35:01.824427 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-07 00:35:01.824438 | orchestrator | Wednesday 07 January 2026 00:34:41 +0000 (0:00:02.522) 0:07:05.366 *****
2026-01-07 00:35:01.824449 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:01.824461 | orchestrator |
2026-01-07 00:35:01.824472 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-07 00:35:01.824482 | orchestrator | Wednesday 07 January 2026 00:34:41 +0000 (0:00:00.112) 0:07:05.478 *****
2026-01-07 00:35:01.824493 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.824504 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:01.824515 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:01.824526 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:01.824537 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:01.824547 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:01.824558 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:01.824569 | orchestrator |
2026-01-07 00:35:01.824595 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-07 00:35:01.824607 | orchestrator | Wednesday 07 January 2026 00:34:42 +0000 (0:00:01.104) 0:07:06.583 *****
2026-01-07 00:35:01.824618 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:01.824629 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:01.824639 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:01.824650 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:01.824661 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:01.824672 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:01.824682 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:01.824693 | orchestrator |
2026-01-07 00:35:01.824703 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-07 00:35:01.824714 | orchestrator | Wednesday 07 January 2026 00:34:43 +0000 (0:00:00.600) 0:07:07.183 *****
2026-01-07 00:35:01.824726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:01.824740 | orchestrator |
2026-01-07 00:35:01.824750 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-07 00:35:01.824762 | orchestrator | Wednesday 07 January 2026 00:34:44 +0000 (0:00:01.136) 0:07:08.319 *****
2026-01-07 00:35:01.824772 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.824783 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:01.824794 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:01.824805 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:01.824816 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:01.824827 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:01.824838 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:01.824849 | orchestrator |
2026-01-07 00:35:01.824860 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-07 00:35:01.824871 | orchestrator | Wednesday 07 January 2026 00:34:45 +0000 (0:00:00.921) 0:07:09.241 *****
2026-01-07 00:35:01.824882 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-07 00:35:01.824911 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-07 00:35:01.824923 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-07 00:35:01.824934 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-07 00:35:01.824945 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-07 00:35:01.824956 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-07 00:35:01.824967 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-07 00:35:01.824978 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-07 00:35:01.824988 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-07 00:35:01.825000 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-07 00:35:01.825019 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-07 00:35:01.825030 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-07 00:35:01.825041 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-07 00:35:01.825052 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-07 00:35:01.825062 | orchestrator |
2026-01-07 00:35:01.825073 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-07 00:35:01.825084 | orchestrator | Wednesday 07 January 2026 00:34:47 +0000 (0:00:02.582) 0:07:11.823 *****
2026-01-07 00:35:01.825095 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:01.825106 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:01.825117 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:01.825128 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:01.825138 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:01.825149 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:01.825160 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:01.825170 | orchestrator |
2026-01-07 00:35:01.825181 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-07 00:35:01.825192 | orchestrator | Wednesday 07 January 2026 00:34:48 +0000 (0:00:00.723) 0:07:12.547 *****
2026-01-07 00:35:01.825205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:35:01.825234 | orchestrator |
2026-01-07 00:35:01.825245 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-07 00:35:01.825256 | orchestrator | Wednesday 07 January 2026 00:34:49 +0000 (0:00:00.841) 0:07:13.388 *****
2026-01-07 00:35:01.825267 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.825278 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:01.825289 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:01.825299 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:01.825310 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:01.825321 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:01.825331 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:01.825342 | orchestrator |
2026-01-07 00:35:01.825353 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-07 00:35:01.825364 | orchestrator | Wednesday 07 January 2026 00:34:50 +0000 (0:00:00.894) 0:07:14.283 *****
2026-01-07 00:35:01.825375 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.825386 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:01.825396 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:01.825407 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:01.825417 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:01.825428 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:01.825439 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:01.825449 | orchestrator |
2026-01-07 00:35:01.825460 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-07 00:35:01.825471 | orchestrator | Wednesday 07 January 2026 00:34:51 +0000 (0:00:01.031) 0:07:15.315 *****
2026-01-07 00:35:01.825482 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:01.825498 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:01.825509 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:01.825520 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:01.825531 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:01.825541 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:01.825552 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:01.825563 | orchestrator |
2026-01-07 00:35:01.825574 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-07 00:35:01.825584 | orchestrator | Wednesday 07 January 2026 00:34:51 +0000 (0:00:00.520) 0:07:15.835 *****
2026-01-07 00:35:01.825595 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.825606 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:01.825617 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:01.825649 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:01.825667 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:01.825684 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:01.825695 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:01.825706 | orchestrator |
2026-01-07 00:35:01.825716 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-07 00:35:01.825727 | orchestrator | Wednesday 07 January 2026 00:34:53 +0000 (0:00:01.505) 0:07:17.341 *****
2026-01-07 00:35:01.825738 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:01.825749 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:01.825760 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:01.825770 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:01.825781 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:01.825792 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:01.825802 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:01.825813 | orchestrator |
2026-01-07 00:35:01.825824 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-07 00:35:01.825835 | orchestrator | Wednesday 07 January 2026 00:34:53 +0000 (0:00:00.536) 0:07:17.877 *****
2026-01-07 00:35:01.825846 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:01.825857 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:01.825867 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:01.825878 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:01.825889 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:01.825899 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:01.825917 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:37.245209 | orchestrator |
2026-01-07 00:35:37.245329 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-07 00:35:37.245348 | orchestrator | Wednesday 07 January 2026 00:35:01 +0000 (0:00:07.999) 0:07:25.877 *****
2026-01-07 00:35:37.245361 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.245373 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:37.245385 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:37.245396 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:37.245407 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:37.245417 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:37.245428 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:37.245439 | orchestrator |
2026-01-07 00:35:37.245450 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-07 00:35:37.245461 | orchestrator | Wednesday 07 January 2026 00:35:03 +0000 (0:00:01.805) 0:07:27.674 *****
2026-01-07 00:35:37.245472 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.245483 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:37.245494 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:37.245504 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:37.245515 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:37.245526 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:37.245536 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:37.245547 | orchestrator |
2026-01-07 00:35:37.245558 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-07 00:35:37.245569 | orchestrator | Wednesday 07 January 2026 00:35:05 +0000 (0:00:01.805) 0:07:29.480 *****
2026-01-07 00:35:37.245580 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.245591 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:35:37.245601 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:35:37.245612 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:35:37.245622 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:35:37.245633 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:35:37.245644 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:35:37.245654 | orchestrator |
2026-01-07 00:35:37.245665 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-07 00:35:37.245676 | orchestrator | Wednesday 07 January 2026 00:35:07 +0000 (0:00:01.719) 0:07:31.200 *****
2026-01-07 00:35:37.245711 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.245725 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:37.245738 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:37.245750 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:37.245762 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:37.245775 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:37.245786 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:37.245796 | orchestrator |
2026-01-07 00:35:37.245807 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-07 00:35:37.245818 | orchestrator | Wednesday 07 January 2026 00:35:08 +0000 (0:00:00.904) 0:07:32.105 *****
2026-01-07 00:35:37.245829 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:37.245840 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:37.245851 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:37.245861 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:37.245872 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:37.245883 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:37.245893 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:37.245904 | orchestrator |
2026-01-07 00:35:37.245915 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-07 00:35:37.245925 | orchestrator | Wednesday 07 January 2026 00:35:09 +0000 (0:00:01.035) 0:07:33.141 *****
2026-01-07 00:35:37.245936 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:37.245947 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:37.245958 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:37.245968 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:35:37.245979 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:35:37.245990 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:35:37.246000 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:35:37.246011 | orchestrator |
2026-01-07 00:35:37.246082 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-07 00:35:37.246094 | orchestrator | Wednesday 07 January 2026 00:35:09 +0000 (0:00:00.570) 0:07:33.711 *****
2026-01-07 00:35:37.246129 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.246143 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:37.246154 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:37.246182 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:37.246194 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:37.246204 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:37.246215 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:37.246225 | orchestrator |
2026-01-07 00:35:37.246236 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-07 00:35:37.246247 | orchestrator | Wednesday 07 January 2026 00:35:10 +0000 (0:00:00.539) 0:07:34.251 *****
2026-01-07 00:35:37.246259 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.246277 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:37.246295 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:37.246314 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:37.246327 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:37.246338 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:37.246349 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:37.246359 | orchestrator |
2026-01-07 00:35:37.246370 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-07 00:35:37.246381 | orchestrator | Wednesday 07 January 2026 00:35:10 +0000 (0:00:00.560) 0:07:34.812 *****
2026-01-07 00:35:37.246392 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.246403 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:37.246413 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:37.246424 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:37.246434 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:37.246445 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:37.246456 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:37.246466 | orchestrator |
2026-01-07 00:35:37.246477 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-07 00:35:37.246488 | orchestrator | Wednesday 07 January 2026 00:35:11 +0000 (0:00:00.780) 0:07:35.592 *****
2026-01-07 00:35:37.246508 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:35:37.246519 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:35:37.246530 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:35:37.246541 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:35:37.246551 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:35:37.246562 | orchestrator | ok: [testbed-manager]
2026-01-07 00:35:37.246572 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:35:37.246583 | orchestrator |
2026-01-07 00:35:37.246613 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-07 00:35:37.246624 | orchestrator | Wednesday 07 January 2026 00:35:17 +0000 (0:00:05.918) 0:07:41.511 *****
2026-01-07 00:35:37.246635 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:35:37.246646 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:35:37.246656 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:35:37.246667
| orchestrator | skipping: [testbed-node-5] 2026-01-07 00:35:37.246678 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:35:37.246692 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:35:37.246710 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:35:37.246727 | orchestrator | 2026-01-07 00:35:37.246745 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-01-07 00:35:37.246763 | orchestrator | Wednesday 07 January 2026 00:35:18 +0000 (0:00:00.583) 0:07:42.095 ***** 2026-01-07 00:35:37.246784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:35:37.246805 | orchestrator | 2026-01-07 00:35:37.246816 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-01-07 00:35:37.246827 | orchestrator | Wednesday 07 January 2026 00:35:19 +0000 (0:00:01.093) 0:07:43.188 ***** 2026-01-07 00:35:37.246837 | orchestrator | ok: [testbed-manager] 2026-01-07 00:35:37.246848 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:35:37.246859 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:35:37.246869 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:35:37.246880 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:35:37.246890 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:35:37.246900 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:35:37.246911 | orchestrator | 2026-01-07 00:35:37.246922 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-01-07 00:35:37.246933 | orchestrator | Wednesday 07 January 2026 00:35:21 +0000 (0:00:02.218) 0:07:45.407 ***** 2026-01-07 00:35:37.246943 | orchestrator | ok: [testbed-manager] 2026-01-07 00:35:37.246954 | orchestrator | ok: [testbed-node-3] 2026-01-07 
00:35:37.246964 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:35:37.246974 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:35:37.246985 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:35:37.246995 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:35:37.247005 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:35:37.247016 | orchestrator | 2026-01-07 00:35:37.247026 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-01-07 00:35:37.247037 | orchestrator | Wednesday 07 January 2026 00:35:22 +0000 (0:00:01.231) 0:07:46.638 ***** 2026-01-07 00:35:37.247047 | orchestrator | ok: [testbed-manager] 2026-01-07 00:35:37.247057 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:35:37.247068 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:35:37.247078 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:35:37.247088 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:35:37.247099 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:35:37.247224 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:35:37.247249 | orchestrator | 2026-01-07 00:35:37.247268 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-01-07 00:35:37.247285 | orchestrator | Wednesday 07 January 2026 00:35:23 +0000 (0:00:00.877) 0:07:47.516 ***** 2026-01-07 00:35:37.247302 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247325 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247336 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247355 | orchestrator | changed: [testbed-node-4] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247366 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247377 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247388 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-07 00:35:37.247398 | orchestrator | 2026-01-07 00:35:37.247409 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-07 00:35:37.247420 | orchestrator | Wednesday 07 January 2026 00:35:25 +0000 (0:00:01.958) 0:07:49.474 ***** 2026-01-07 00:35:37.247431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:35:37.247442 | orchestrator | 2026-01-07 00:35:37.247453 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-07 00:35:37.247464 | orchestrator | Wednesday 07 January 2026 00:35:26 +0000 (0:00:00.814) 0:07:50.289 ***** 2026-01-07 00:35:37.247474 | orchestrator | changed: [testbed-manager] 2026-01-07 00:35:37.247485 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:35:37.247496 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:35:37.247506 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:35:37.247517 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:35:37.247527 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:35:37.247538 | orchestrator | changed: 
[testbed-node-2] 2026-01-07 00:35:37.247549 | orchestrator | 2026-01-07 00:35:37.247570 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-07 00:36:10.267899 | orchestrator | Wednesday 07 January 2026 00:35:37 +0000 (0:00:11.009) 0:08:01.299 ***** 2026-01-07 00:36:10.268059 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:10.268070 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:10.268075 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:10.268080 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:10.268084 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:10.268089 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:10.268093 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:10.268097 | orchestrator | 2026-01-07 00:36:10.268103 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-07 00:36:10.268108 | orchestrator | Wednesday 07 January 2026 00:35:39 +0000 (0:00:02.056) 0:08:03.356 ***** 2026-01-07 00:36:10.268112 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:10.268117 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:10.268121 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:10.268138 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:10.268142 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:10.268146 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:10.268151 | orchestrator | 2026-01-07 00:36:10.268155 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-07 00:36:10.268159 | orchestrator | Wednesday 07 January 2026 00:35:40 +0000 (0:00:01.442) 0:08:04.798 ***** 2026-01-07 00:36:10.268164 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268169 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268189 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268194 | orchestrator | changed: 
[testbed-node-5] 2026-01-07 00:36:10.268198 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268202 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268206 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268210 | orchestrator | 2026-01-07 00:36:10.268214 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-07 00:36:10.268219 | orchestrator | 2026-01-07 00:36:10.268223 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-07 00:36:10.268227 | orchestrator | Wednesday 07 January 2026 00:35:42 +0000 (0:00:01.431) 0:08:06.230 ***** 2026-01-07 00:36:10.268231 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:36:10.268235 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:10.268239 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:10.268243 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:10.268247 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:10.268252 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:10.268256 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:10.268260 | orchestrator | 2026-01-07 00:36:10.268264 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-07 00:36:10.268268 | orchestrator | 2026-01-07 00:36:10.268272 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-07 00:36:10.268277 | orchestrator | Wednesday 07 January 2026 00:35:42 +0000 (0:00:00.769) 0:08:06.999 ***** 2026-01-07 00:36:10.268281 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268285 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268289 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268293 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268297 | orchestrator | changed: [testbed-node-0] 2026-01-07 
00:36:10.268301 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268305 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268309 | orchestrator | 2026-01-07 00:36:10.268313 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-07 00:36:10.268318 | orchestrator | Wednesday 07 January 2026 00:35:44 +0000 (0:00:01.467) 0:08:08.467 ***** 2026-01-07 00:36:10.268322 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:10.268326 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:10.268330 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:10.268334 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:10.268338 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:10.268342 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:10.268346 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:10.268350 | orchestrator | 2026-01-07 00:36:10.268354 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-07 00:36:10.268358 | orchestrator | Wednesday 07 January 2026 00:35:45 +0000 (0:00:01.586) 0:08:10.054 ***** 2026-01-07 00:36:10.268402 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:36:10.268407 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:36:10.268411 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:36:10.268416 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:36:10.268420 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:36:10.268424 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:36:10.268428 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:36:10.268432 | orchestrator | 2026-01-07 00:36:10.268437 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-07 00:36:10.268441 | orchestrator | Wednesday 07 January 2026 00:35:46 +0000 (0:00:00.507) 0:08:10.561 ***** 2026-01-07 00:36:10.268446 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:36:10.268451 | orchestrator | 2026-01-07 00:36:10.268456 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-07 00:36:10.268461 | orchestrator | Wednesday 07 January 2026 00:35:47 +0000 (0:00:00.973) 0:08:11.535 ***** 2026-01-07 00:36:10.268472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:36:10.268479 | orchestrator | 2026-01-07 00:36:10.268484 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-07 00:36:10.268489 | orchestrator | Wednesday 07 January 2026 00:35:48 +0000 (0:00:00.755) 0:08:12.291 ***** 2026-01-07 00:36:10.268494 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268498 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268503 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268508 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268513 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268517 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268522 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268527 | orchestrator | 2026-01-07 00:36:10.268545 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-07 00:36:10.268550 | orchestrator | Wednesday 07 January 2026 00:35:57 +0000 (0:00:09.319) 0:08:21.610 ***** 2026-01-07 00:36:10.268555 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268560 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268565 | orchestrator | changed: [testbed-node-4] 2026-01-07 
00:36:10.268569 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268574 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268579 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268584 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268588 | orchestrator | 2026-01-07 00:36:10.268592 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-07 00:36:10.268596 | orchestrator | Wednesday 07 January 2026 00:35:58 +0000 (0:00:01.206) 0:08:22.817 ***** 2026-01-07 00:36:10.268601 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268605 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268609 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268613 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268617 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268621 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268625 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268629 | orchestrator | 2026-01-07 00:36:10.268633 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-07 00:36:10.268637 | orchestrator | Wednesday 07 January 2026 00:36:00 +0000 (0:00:01.359) 0:08:24.176 ***** 2026-01-07 00:36:10.268641 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268645 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268649 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268653 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268657 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268661 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268665 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268669 | orchestrator | 2026-01-07 00:36:10.268673 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-07 00:36:10.268677 | orchestrator | Wednesday 07 January 2026 00:36:02 +0000 (0:00:02.217) 0:08:26.395 ***** 2026-01-07 00:36:10.268681 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268685 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268690 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268694 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268698 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268702 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268706 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268710 | orchestrator | 2026-01-07 00:36:10.268714 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-01-07 00:36:10.268718 | orchestrator | Wednesday 07 January 2026 00:36:03 +0000 (0:00:01.451) 0:08:27.847 ***** 2026-01-07 00:36:10.268728 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268732 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268736 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268740 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268744 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268748 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268752 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268756 | orchestrator | 2026-01-07 00:36:10.268760 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-01-07 00:36:10.268764 | orchestrator | 2026-01-07 00:36:10.268768 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-01-07 00:36:10.268772 | orchestrator | Wednesday 07 January 2026 00:36:04 +0000 (0:00:01.197) 0:08:29.045 ***** 2026-01-07 00:36:10.268777 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-07 00:36:10.268781 | orchestrator | 2026-01-07 00:36:10.268785 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-07 00:36:10.268789 | orchestrator | Wednesday 07 January 2026 00:36:05 +0000 (0:00:00.822) 0:08:29.867 ***** 2026-01-07 00:36:10.268796 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:10.268800 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:10.268804 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:10.268808 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:10.268812 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:10.268816 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:10.268820 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:10.268824 | orchestrator | 2026-01-07 00:36:10.268828 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-07 00:36:10.268833 | orchestrator | Wednesday 07 January 2026 00:36:06 +0000 (0:00:01.119) 0:08:30.987 ***** 2026-01-07 00:36:10.268837 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:10.268841 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:10.268845 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:10.268849 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:10.268853 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:10.268857 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:10.268861 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:10.268865 | orchestrator | 2026-01-07 00:36:10.268870 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-01-07 00:36:10.268874 | orchestrator | Wednesday 07 January 2026 00:36:08 +0000 (0:00:01.263) 0:08:32.250 ***** 2026-01-07 00:36:10.268878 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-07 00:36:10.268882 | orchestrator | 2026-01-07 00:36:10.268886 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-07 00:36:10.268890 | orchestrator | Wednesday 07 January 2026 00:36:09 +0000 (0:00:01.166) 0:08:33.417 ***** 2026-01-07 00:36:10.268894 | orchestrator | ok: [testbed-manager] 2026-01-07 00:36:10.268898 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:36:10.268902 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:36:10.268907 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:36:10.268911 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:36:10.268915 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:36:10.268919 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:36:10.268923 | orchestrator | 2026-01-07 00:36:10.268930 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-07 00:36:11.867381 | orchestrator | Wednesday 07 January 2026 00:36:10 +0000 (0:00:00.904) 0:08:34.321 ***** 2026-01-07 00:36:11.867472 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:11.867482 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:11.867489 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:11.867497 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:11.867503 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:11.867537 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:11.867543 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:11.867549 | orchestrator | 2026-01-07 00:36:11.867557 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:36:11.867565 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-07 00:36:11.867573 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-01-07 00:36:11.867580 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-07 00:36:11.867586 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-07 00:36:11.867592 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-01-07 00:36:11.867598 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-07 00:36:11.867605 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-07 00:36:11.867610 | orchestrator | 2026-01-07 00:36:11.867616 | orchestrator | 2026-01-07 00:36:11.867623 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:36:11.867630 | orchestrator | Wednesday 07 January 2026 00:36:11 +0000 (0:00:01.100) 0:08:35.421 ***** 2026-01-07 00:36:11.867635 | orchestrator | =============================================================================== 2026-01-07 00:36:11.867641 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.90s 2026-01-07 00:36:11.867648 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.67s 2026-01-07 00:36:11.867654 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.48s 2026-01-07 00:36:11.867660 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.31s 2026-01-07 00:36:11.867666 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.18s 2026-01-07 00:36:11.867673 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.00s 2026-01-07 00:36:11.867679 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 
11.23s 2026-01-07 00:36:11.867686 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 11.01s 2026-01-07 00:36:11.867692 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.22s 2026-01-07 00:36:11.867699 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.91s 2026-01-07 00:36:11.867718 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.90s 2026-01-07 00:36:11.867724 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.64s 2026-01-07 00:36:11.867730 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.46s 2026-01-07 00:36:11.867736 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.32s 2026-01-07 00:36:11.867742 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.19s 2026-01-07 00:36:11.867748 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.00s 2026-01-07 00:36:11.867754 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.80s 2026-01-07 00:36:11.867760 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.11s 2026-01-07 00:36:11.867767 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.96s 2026-01-07 00:36:11.867777 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.92s 2026-01-07 00:36:12.195469 | orchestrator | + osism apply fail2ban 2026-01-07 00:36:24.879644 | orchestrator | 2026-01-07 00:36:24 | INFO  | Task d5973456-c000-4ccd-9fad-8eeba4a25e6c (fail2ban) was prepared for execution. 
2026-01-07 00:36:24.879756 | orchestrator | 2026-01-07 00:36:24 | INFO  | It takes a moment until task d5973456-c000-4ccd-9fad-8eeba4a25e6c (fail2ban) has been started and output is visible here. 2026-01-07 00:36:47.142350 | orchestrator | 2026-01-07 00:36:47.142451 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-01-07 00:36:47.142464 | orchestrator | 2026-01-07 00:36:47.142472 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-01-07 00:36:47.142481 | orchestrator | Wednesday 07 January 2026 00:36:29 +0000 (0:00:00.269) 0:00:00.269 ***** 2026-01-07 00:36:47.142491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:36:47.142503 | orchestrator | 2026-01-07 00:36:47.142516 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-01-07 00:36:47.142525 | orchestrator | Wednesday 07 January 2026 00:36:30 +0000 (0:00:01.215) 0:00:01.484 ***** 2026-01-07 00:36:47.142537 | orchestrator | changed: [testbed-manager] 2026-01-07 00:36:47.142550 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:36:47.142562 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:36:47.142575 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:36:47.142586 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:36:47.142599 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:36:47.142611 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:36:47.142621 | orchestrator | 2026-01-07 00:36:47.142634 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-01-07 00:36:47.142647 | orchestrator | Wednesday 07 January 2026 00:36:41 +0000 (0:00:11.198) 0:00:12.683 ***** 
2026-01-07 00:36:47.142659 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:47.142671 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:47.142679 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:47.142687 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:47.142694 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:47.142701 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:47.142708 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:47.142716 | orchestrator |
2026-01-07 00:36:47.142723 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-07 00:36:47.142731 | orchestrator | Wednesday 07 January 2026 00:36:43 +0000 (0:00:01.522) 0:00:14.205 *****
2026-01-07 00:36:47.142738 | orchestrator | ok: [testbed-manager]
2026-01-07 00:36:47.142747 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:36:47.142754 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:36:47.142761 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:36:47.142768 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:36:47.142775 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:36:47.142782 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:36:47.142789 | orchestrator |
2026-01-07 00:36:47.142796 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-07 00:36:47.142803 | orchestrator | Wednesday 07 January 2026 00:36:44 +0000 (0:00:01.558) 0:00:15.764 *****
2026-01-07 00:36:47.142810 | orchestrator | changed: [testbed-manager]
2026-01-07 00:36:47.142872 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:36:47.142880 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:36:47.142887 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:36:47.142895 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:36:47.142903 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:36:47.142911 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:36:47.142919 | orchestrator |
2026-01-07 00:36:47.142949 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:36:47.142958 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.142967 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.142976 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.142984 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.142993 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.143001 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.143009 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:36:47.143018 | orchestrator |
2026-01-07 00:36:47.143026 | orchestrator |
2026-01-07 00:36:47.143034 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:36:47.143043 | orchestrator | Wednesday 07 January 2026 00:36:46 +0000 (0:00:01.699) 0:00:17.463 *****
2026-01-07 00:36:47.143051 | orchestrator | ===============================================================================
2026-01-07 00:36:47.143060 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.20s
2026-01-07 00:36:47.143067 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.70s
2026-01-07 00:36:47.143075 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.56s
2026-01-07 00:36:47.143084 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-01-07 00:36:47.143092 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s
2026-01-07 00:36:47.466213 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-07 00:36:47.466315 | orchestrator | + osism apply network
2026-01-07 00:36:59.454281 | orchestrator | 2026-01-07 00:36:59 | INFO  | Task 3a79b6c6-8617-474a-8d5b-4aceac259126 (network) was prepared for execution.
2026-01-07 00:36:59.454402 | orchestrator | 2026-01-07 00:36:59 | INFO  | It takes a moment until task 3a79b6c6-8617-474a-8d5b-4aceac259126 (network) has been started and output is visible here.
2026-01-07 00:37:28.516395 | orchestrator |
2026-01-07 00:37:28.516470 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-07 00:37:28.516478 | orchestrator |
2026-01-07 00:37:28.516491 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-07 00:37:28.516496 | orchestrator | Wednesday 07 January 2026 00:37:03 +0000 (0:00:00.251) 0:00:00.251 *****
2026-01-07 00:37:28.516501 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.516506 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.516510 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.516514 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.516518 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.516522 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.516526 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.516530 | orchestrator |
2026-01-07 00:37:28.516534 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-07 00:37:28.516538 | orchestrator | Wednesday 07 January 2026 00:37:04 +0000 (0:00:00.701) 0:00:00.953 *****
2026-01-07 00:37:28.516543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:37:28.516564 | orchestrator |
2026-01-07 00:37:28.516568 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-07 00:37:28.516572 | orchestrator | Wednesday 07 January 2026 00:37:05 +0000 (0:00:01.196) 0:00:02.150 *****
2026-01-07 00:37:28.516576 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.516580 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.516584 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.516588 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.516591 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.516595 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.516599 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.516602 | orchestrator |
2026-01-07 00:37:28.516606 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-07 00:37:28.516610 | orchestrator | Wednesday 07 January 2026 00:37:07 +0000 (0:00:02.076) 0:00:04.226 *****
2026-01-07 00:37:28.516614 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.516618 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.516622 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.516626 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.516630 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.516633 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.516637 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.516641 | orchestrator |
2026-01-07 00:37:28.516644 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-07 00:37:28.516648 | orchestrator | Wednesday 07 January 2026 00:37:09 +0000 (0:00:01.820) 0:00:06.046 *****
2026-01-07 00:37:28.516652 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-07 00:37:28.516657 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-07 00:37:28.516660 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-07 00:37:28.516664 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-07 00:37:28.516668 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-07 00:37:28.516672 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-07 00:37:28.516676 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-07 00:37:28.516680 | orchestrator |
2026-01-07 00:37:28.516684 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-07 00:37:28.516700 | orchestrator | Wednesday 07 January 2026 00:37:10 +0000 (0:00:00.993) 0:00:07.040 *****
2026-01-07 00:37:28.516704 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 00:37:28.516708 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 00:37:28.516712 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:37:28.516716 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-07 00:37:28.516760 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-07 00:37:28.516767 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 00:37:28.516773 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 00:37:28.516779 | orchestrator |
2026-01-07 00:37:28.516786 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-07 00:37:28.516796 | orchestrator | Wednesday 07 January 2026 00:37:14 +0000 (0:00:03.567) 0:00:10.607 *****
2026-01-07 00:37:28.516802 | orchestrator | changed: [testbed-manager]
2026-01-07 00:37:28.516806 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:37:28.516810 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:37:28.516814 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:37:28.516817 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:37:28.516821 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:37:28.516825 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:37:28.516829 | orchestrator |
2026-01-07 00:37:28.516832 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-07 00:37:28.516836 | orchestrator | Wednesday 07 January 2026 00:37:15 +0000 (0:00:01.647) 0:00:12.255 *****
2026-01-07 00:37:28.516840 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:37:28.516844 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 00:37:28.516852 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-07 00:37:28.516856 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-07 00:37:28.516860 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 00:37:28.516864 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-07 00:37:28.516867 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-07 00:37:28.516871 | orchestrator |
2026-01-07 00:37:28.516875 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-07 00:37:28.516879 | orchestrator | Wednesday 07 January 2026 00:37:17 +0000 (0:00:01.805) 0:00:14.061 *****
2026-01-07 00:37:28.516883 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.516886 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.516891 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.516894 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.516898 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.516902 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.516906 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.516909 | orchestrator |
2026-01-07 00:37:28.516913 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-07 00:37:28.516927 | orchestrator | Wednesday 07 January 2026 00:37:18 +0000 (0:00:01.146) 0:00:15.207 *****
2026-01-07 00:37:28.516932 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:37:28.516936 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:28.516939 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:28.516943 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:28.516947 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:28.516950 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:28.516954 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:28.516958 | orchestrator |
2026-01-07 00:37:28.516962 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-01-07 00:37:28.516966 | orchestrator | Wednesday 07 January 2026 00:37:19 +0000 (0:00:00.652) 0:00:15.859 *****
2026-01-07 00:37:28.516969 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.516973 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.516977 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.516980 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.516985 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.516988 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.516992 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.516996 | orchestrator |
2026-01-07 00:37:28.516999 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-01-07 00:37:28.517003 | orchestrator | Wednesday 07 January 2026 00:37:21 +0000 (0:00:02.212) 0:00:18.072 *****
2026-01-07 00:37:28.517007 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:28.517011 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:28.517015 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:28.517018 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:28.517022 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:28.517026 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:28.517030 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-01-07 00:37:28.517036 | orchestrator |
2026-01-07 00:37:28.517039 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-01-07 00:37:28.517043 | orchestrator | Wednesday 07 January 2026 00:37:22 +0000 (0:00:00.938) 0:00:19.010 *****
2026-01-07 00:37:28.517047 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.517051 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:37:28.517054 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:37:28.517058 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:37:28.517062 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:37:28.517065 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:37:28.517069 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:37:28.517073 | orchestrator |
2026-01-07 00:37:28.517077 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-01-07 00:37:28.517084 | orchestrator | Wednesday 07 January 2026 00:37:24 +0000 (0:00:01.719) 0:00:20.730 *****
2026-01-07 00:37:28.517088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:37:28.517093 | orchestrator |
2026-01-07 00:37:28.517097 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-07 00:37:28.517101 | orchestrator | Wednesday 07 January 2026 00:37:25 +0000 (0:00:01.256) 0:00:21.986 *****
2026-01-07 00:37:28.517105 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.517108 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.517112 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.517116 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.517119 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.517123 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.517127 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.517131 | orchestrator |
2026-01-07 00:37:28.517134 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-01-07 00:37:28.517138 | orchestrator | Wednesday 07 January 2026 00:37:26 +0000 (0:00:01.177) 0:00:23.164 *****
2026-01-07 00:37:28.517142 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:28.517146 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:28.517149 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:28.517153 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:28.517157 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:28.517160 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:28.517167 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:28.517171 | orchestrator |
2026-01-07 00:37:28.517175 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-07 00:37:28.517178 | orchestrator | Wednesday 07 January 2026 00:37:27 +0000 (0:00:00.652) 0:00:23.816 *****
2026-01-07 00:37:28.517182 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517186 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517190 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517193 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517197 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517201 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517204 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517208 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517212 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517216 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517219 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517223 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517227 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-01-07 00:37:28.517231 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-07 00:37:28.517234 | orchestrator |
2026-01-07 00:37:28.517241 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-01-07 00:37:44.149738 | orchestrator | Wednesday 07 January 2026 00:37:28 +0000 (0:00:01.266) 0:00:25.083 *****
2026-01-07 00:37:44.149860 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:37:44.149879 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:44.149899 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:44.149919 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:44.149938 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:44.149993 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:44.150012 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:44.150101 | orchestrator |
2026-01-07 00:37:44.150122 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-01-07 00:37:44.150142 | orchestrator | Wednesday 07 January 2026 00:37:29 +0000 (0:00:00.663) 0:00:25.746 *****
2026-01-07 00:37:44.150163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3
2026-01-07 00:37:44.150187 | orchestrator |
2026-01-07 00:37:44.150206 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-01-07 00:37:44.150224 | orchestrator | Wednesday 07 January 2026 00:37:33 +0000 (0:00:04.570) 0:00:30.317 *****
2026-01-07 00:37:44.150238 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150270 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150485 | orchestrator |
2026-01-07 00:37:44.150498 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-01-07 00:37:44.150511 | orchestrator | Wednesday 07 January 2026 00:37:38 +0000 (0:00:05.065) 0:00:35.382 *****
2026-01-07 00:37:44.150524 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150537 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-07 00:37:44.150625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:44.150732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:49.375324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-07 00:37:49.375439 | orchestrator |
2026-01-07 00:37:49.375457 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-01-07 00:37:49.375470 | orchestrator | Wednesday 07 January 2026 00:37:44 +0000 (0:00:05.330) 0:00:40.713 *****
2026-01-07 00:37:49.375484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:37:49.375495 | orchestrator |
2026-01-07 00:37:49.375507 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-07 00:37:49.375518 | orchestrator | Wednesday 07 January 2026 00:37:45 +0000 (0:00:01.089) 0:00:41.803 *****
2026-01-07 00:37:49.375529 | orchestrator | ok: [testbed-manager]
2026-01-07 00:37:49.375541 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:37:49.375552 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:37:49.375563 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:37:49.375574 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:37:49.375584 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:37:49.375595 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:37:49.375606 | orchestrator |
2026-01-07 00:37:49.375617 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-07 00:37:49.375628 | orchestrator | Wednesday 07 January 2026 00:37:46 +0000 (0:00:01.037) 0:00:42.840 *****
2026-01-07 00:37:49.375639 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.375650 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.375661 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.375711 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.375724 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:37:49.375736 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.375747 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.375769 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.375781 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.375792 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:49.375803 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.375813 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.375824 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.375860 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.375874 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:49.375887 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.375899 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.375928 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.375942 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.375955 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:49.375968 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.375980 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.375992 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.376005 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.376018 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:49.376031 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.376044 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.376057 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.376070 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.376083 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:49.376096 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-07 00:37:49.376107 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-07 00:37:49.376118 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-07 00:37:49.376129 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-07 00:37:49.376140 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:49.376150 | orchestrator |
2026-01-07 00:37:49.376161 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-07 00:37:49.376190 | orchestrator | Wednesday 07 January 2026 00:37:47 +0000 (0:00:01.735) 0:00:44.575 *****
2026-01-07 00:37:49.376201 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:37:49.376212 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:49.376223 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:49.376234 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:49.376245 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:49.376256 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:49.376266 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:49.376277 | orchestrator |
2026-01-07 00:37:49.376288 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-07 00:37:49.376299 | orchestrator | Wednesday 07 January 2026 00:37:48 +0000 (0:00:00.543) 0:00:45.119 *****
2026-01-07 00:37:49.376309 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:37:49.376321 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:37:49.376331 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:37:49.376342 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:37:49.376353 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:37:49.376363 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:37:49.376374 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:37:49.376384 | orchestrator |
2026-01-07 00:37:49.376395 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:37:49.376407 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 00:37:49.376428 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376439 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376450 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376460 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376471 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376482 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 00:37:49.376493 | orchestrator |
2026-01-07 00:37:49.376503 | orchestrator |
2026-01-07 00:37:49.376514 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:37:49.376525 | orchestrator | Wednesday 07 January 2026 00:37:49 +0000 (0:00:00.588) 0:00:45.707 *****
2026-01-07 00:37:49.376536 | orchestrator | ===============================================================================
2026-01-07 00:37:49.376547 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.33s
2026-01-07 00:37:49.376557 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.07s
2026-01-07 00:37:49.376568 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.57s
2026-01-07 00:37:49.376579 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.57s
2026-01-07 00:37:49.376590 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s
2026-01-07 00:37:49.376605 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.08s
2026-01-07 00:37:49.376616 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s
2026-01-07 00:37:49.376627 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s
2026-01-07 00:37:49.376638 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.74s
2026-01-07 00:37:49.376648 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2026-01-07 00:37:49.376659 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s
2026-01-07 00:37:49.376670 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.27s
2026-01-07 00:37:49.376710 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s
2026-01-07 00:37:49.376729 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s
2026-01-07 00:37:49.376747 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-01-07 00:37:49.376766 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2026-01-07 00:37:49.376778 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s
2026-01-07 00:37:49.376788 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.04s
2026-01-07 00:37:49.376799 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2026-01-07 00:37:49.376809 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s
2026-01-07 00:37:49.624056 | orchestrator | + osism apply wireguard
2026-01-07 00:38:01.679095 | orchestrator | 2026-01-07 00:38:01 | INFO  | Task ed7d9f53-4453-4e62-9e6a-aa9165258160 (wireguard) was prepared for execution.
2026-01-07 00:38:01.679232 | orchestrator | 2026-01-07 00:38:01 | INFO  | It takes a moment until task ed7d9f53-4453-4e62-9e6a-aa9165258160 (wireguard) has been started and output is visible here.
2026-01-07 00:38:21.972149 | orchestrator |
2026-01-07 00:38:21.972262 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-07 00:38:21.972278 | orchestrator |
2026-01-07 00:38:21.972288 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-07 00:38:21.972300 | orchestrator | Wednesday 07 January 2026 00:38:05 +0000 (0:00:00.229) 0:00:00.229 *****
2026-01-07 00:38:21.972310 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:21.972321 | orchestrator |
2026-01-07 00:38:21.972331 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-07 00:38:21.972346 | orchestrator | Wednesday 07 January 2026 00:38:07 +0000 (0:00:01.480) 0:00:01.710 *****
2026-01-07 00:38:21.972357 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972367 | orchestrator |
2026-01-07 00:38:21.972377 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-07 00:38:21.972387 | orchestrator | Wednesday 07 January 2026 00:38:14 +0000 (0:00:06.870) 0:00:08.580 *****
2026-01-07 00:38:21.972396 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972406 | orchestrator |
2026-01-07 00:38:21.972416 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-07 00:38:21.972426 | orchestrator | Wednesday 07 January 2026 00:38:14 +0000 (0:00:00.549) 0:00:09.130 *****
2026-01-07 00:38:21.972435 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972445 | orchestrator |
2026-01-07 00:38:21.972455 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-07 00:38:21.972464 | orchestrator | Wednesday 07 January 2026 00:38:15 +0000 (0:00:00.451) 0:00:09.581 *****
2026-01-07 00:38:21.972474 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:21.972483 | orchestrator |
2026-01-07 00:38:21.972492 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-07 00:38:21.972502 | orchestrator | Wednesday 07 January 2026 00:38:15 +0000 (0:00:00.698) 0:00:10.279 *****
2026-01-07 00:38:21.972511 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:21.972519 | orchestrator |
2026-01-07 00:38:21.972548 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-07 00:38:21.972558 | orchestrator | Wednesday 07 January 2026 00:38:16 +0000 (0:00:00.414) 0:00:10.694 *****
2026-01-07 00:38:21.972567 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:21.972577 | orchestrator |
2026-01-07 00:38:21.972586 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-07 00:38:21.972595 | orchestrator | Wednesday 07 January 2026 00:38:16 +0000 (0:00:00.421) 0:00:11.115 *****
2026-01-07 00:38:21.972647 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972658 | orchestrator |
2026-01-07 00:38:21.972668 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-07 00:38:21.972678 | orchestrator | Wednesday 07 January 2026 00:38:17 +0000 (0:00:01.204) 0:00:12.319 *****
2026-01-07 00:38:21.972689 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-07 00:38:21.972700 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972711 | orchestrator |
2026-01-07 00:38:21.972721 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-07 00:38:21.972732 | orchestrator | Wednesday 07 January 2026 00:38:18 +0000 (0:00:00.926) 0:00:13.246 *****
2026-01-07 00:38:21.972742 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972753 | orchestrator |
2026-01-07 00:38:21.972763 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-07 00:38:21.972773 | orchestrator | Wednesday 07 January 2026 00:38:20 +0000 (0:00:01.654) 0:00:14.900 *****
2026-01-07 00:38:21.972784 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:21.972794 | orchestrator |
2026-01-07 00:38:21.972805 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:38:21.972815 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:38:21.972855 | orchestrator |
2026-01-07 00:38:21.972866 | orchestrator |
2026-01-07 00:38:21.972878 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:38:21.972887 | orchestrator | Wednesday 07 January 2026 00:38:21 +0000 (0:00:00.953) 0:00:15.854 *****
2026-01-07 00:38:21.972898 | orchestrator | ===============================================================================
2026-01-07 00:38:21.972977 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.87s
2026-01-07 00:38:21.972992 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s
2026-01-07 00:38:21.973001 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.48s
2026-01-07 00:38:21.973011 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-01-07 00:38:21.973021 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2026-01-07 00:38:21.973031 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-01-07 00:38:21.973041 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-01-07 00:38:21.973051 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-01-07 00:38:21.973061 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-01-07 00:38:21.973071 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-01-07 00:38:21.973081 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-01-07 00:38:22.314297 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-07 00:38:22.352204 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-07 00:38:22.352284 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-07 00:38:22.427057 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 199 0 --:--:-- --:--:-- --:--:-- 202
2026-01-07 00:38:22.443790 | orchestrator | + osism apply --environment custom workarounds
2026-01-07 00:38:24.416806 | orchestrator | 2026-01-07 00:38:24 | INFO  | Trying to run play workarounds in environment custom
2026-01-07 00:38:34.536265 | orchestrator | 2026-01-07 00:38:34 | INFO  | Task 3b3b5262-8ae3-4608-bbaf-b9e799857f32 (workarounds) was prepared for execution.
2026-01-07 00:38:34.536425 | orchestrator | 2026-01-07 00:38:34 | INFO  | It takes a moment until task 3b3b5262-8ae3-4608-bbaf-b9e799857f32 (workarounds) has been started and output is visible here.
2026-01-07 00:38:59.604717 | orchestrator |
2026-01-07 00:38:59.604868 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:38:59.604898 | orchestrator |
2026-01-07 00:38:59.604917 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-07 00:38:59.604936 | orchestrator | Wednesday 07 January 2026 00:38:38 +0000 (0:00:00.144) 0:00:00.144 *****
2026-01-07 00:38:59.604954 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-07 00:38:59.604974 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-07 00:38:59.604992 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-07 00:38:59.605011 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-07 00:38:59.605025 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-07 00:38:59.605037 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-07 00:38:59.605048 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-07 00:38:59.605058 | orchestrator |
2026-01-07 00:38:59.605069 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-07 00:38:59.605080 | orchestrator |
2026-01-07 00:38:59.605091 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-07 00:38:59.605128 | orchestrator | Wednesday 07 January 2026 00:38:39 +0000 (0:00:00.825) 0:00:00.970 *****
2026-01-07 00:38:59.605140 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:59.605152 | orchestrator |
2026-01-07 00:38:59.605163 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-07 00:38:59.605174 | orchestrator |
2026-01-07 00:38:59.605185 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-07 00:38:59.605196 | orchestrator | Wednesday 07 January 2026 00:38:41 +0000 (0:00:02.194) 0:00:03.165 *****
2026-01-07 00:38:59.605207 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:38:59.605218 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:38:59.605229 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:38:59.605241 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:38:59.605253 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:38:59.605265 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:38:59.605278 | orchestrator |
2026-01-07 00:38:59.605291 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-07 00:38:59.605304 | orchestrator |
2026-01-07 00:38:59.605316 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-07 00:38:59.605329 | orchestrator | Wednesday 07 January 2026 00:38:43 +0000 (0:00:01.888) 0:00:05.054 *****
2026-01-07 00:38:59.605342 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605355 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605367 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605395 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605408 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605421 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-07 00:38:59.605433 | orchestrator |
2026-01-07 00:38:59.605446 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-07 00:38:59.605460 | orchestrator | Wednesday 07 January 2026 00:38:45 +0000 (0:00:01.596) 0:00:06.650 *****
2026-01-07 00:38:59.605472 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:38:59.605484 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:38:59.605497 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:38:59.605509 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:38:59.605548 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:38:59.605561 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:38:59.605573 | orchestrator |
2026-01-07 00:38:59.605585 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-07 00:38:59.605597 | orchestrator | Wednesday 07 January 2026 00:38:48 +0000 (0:00:03.691) 0:00:10.341 *****
2026-01-07 00:38:59.605608 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:38:59.605618 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:38:59.605629 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:38:59.605639 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:38:59.605650 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:38:59.605661 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:38:59.605671 | orchestrator |
2026-01-07 00:38:59.605682 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-07 00:38:59.605692 | orchestrator |
2026-01-07 00:38:59.605703 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-07 00:38:59.605715 | orchestrator | Wednesday 07 January 2026 00:38:49 +0000 (0:00:00.684) 0:00:11.026 *****
2026-01-07 00:38:59.605732 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:38:59.605759 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:38:59.605779 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:38:59.605796 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:38:59.605827 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:38:59.605845 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:38:59.605862 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:59.605880 | orchestrator |
2026-01-07 00:38:59.605898 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-07 00:38:59.605910 | orchestrator | Wednesday 07 January 2026 00:38:51 +0000 (0:00:01.575) 0:00:12.601 *****
2026-01-07 00:38:59.605920 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:38:59.605931 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:38:59.605941 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:38:59.605952 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:38:59.605962 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:38:59.605973 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:38:59.606004 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:59.606069 | orchestrator |
2026-01-07 00:38:59.606087 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-07 00:38:59.606114 | orchestrator | Wednesday 07 January 2026 00:38:52 +0000 (0:00:01.547) 0:00:14.241 *****
2026-01-07 00:38:59.606135 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:38:59.606153 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:38:59.606170 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:38:59.606187 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:38:59.606202 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:38:59.606220 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:38:59.606238 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:59.606255 | orchestrator |
2026-01-07 00:38:59.606272 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-07 00:38:59.606290 | orchestrator | Wednesday 07 January 2026 00:38:54 +0000 (0:00:01.547) 0:00:15.788 *****
2026-01-07 00:38:59.606307 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:38:59.606324 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:38:59.606341 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:38:59.606359 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:38:59.606376 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:38:59.606394 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:38:59.606411 | orchestrator | changed: [testbed-manager]
2026-01-07 00:38:59.606428 | orchestrator |
2026-01-07 00:38:59.606445 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-07 00:38:59.606464 | orchestrator | Wednesday 07 January 2026 00:38:56 +0000 (0:00:01.746) 0:00:17.535 *****
2026-01-07 00:38:59.606481 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:38:59.606498 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:38:59.606516 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:38:59.606577 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:38:59.606593 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:38:59.606609 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:38:59.606626 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:38:59.606644 | orchestrator |
2026-01-07 00:38:59.606660 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-07 00:38:59.606677 | orchestrator |
2026-01-07 00:38:59.606695 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-07 00:38:59.606731 | orchestrator | Wednesday 07 January 2026 00:38:56 +0000 (0:00:00.611) 0:00:18.146 *****
2026-01-07 00:38:59.606750 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:38:59.606769 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:38:59.606786 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:38:59.606802 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:38:59.606818 | orchestrator | ok: [testbed-manager]
2026-01-07 00:38:59.606835 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:38:59.606851 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:38:59.606868 | orchestrator |
2026-01-07 00:38:59.606885 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:38:59.606908 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:38:59.606962 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.606984 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.607002 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.607018 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.607029 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.607040 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:38:59.607050 | orchestrator |
2026-01-07 00:38:59.607061 | orchestrator |
2026-01-07 00:38:59.607072 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:38:59.607083 | orchestrator | Wednesday 07 January 2026 00:38:59 +0000 (0:00:02.799) 0:00:20.946 *****
2026-01-07 00:38:59.607094 | orchestrator | ===============================================================================
2026-01-07 00:38:59.607105 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.69s
2026-01-07 00:38:59.607116 | orchestrator | Install python3-docker -------------------------------------------------- 2.80s
2026-01-07 00:38:59.607127 | orchestrator | Apply netplan configuration --------------------------------------------- 2.19s
2026-01-07 00:38:59.607137 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s
2026-01-07 00:38:59.607148 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s
2026-01-07 00:38:59.607159 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.64s
2026-01-07 00:38:59.607170 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s
2026-01-07 00:38:59.607180 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.58s
2026-01-07 00:38:59.607191 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.55s
2026-01-07 00:38:59.607201 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s
2026-01-07 00:38:59.607212 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s
2026-01-07 00:38:59.607241 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s
2026-01-07 00:39:00.255238 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-07 00:39:12.357792 | orchestrator | 2026-01-07 00:39:12 | INFO  | Task e31de345-1600-47fb-85ef-8ec6473b793d (reboot) was prepared for execution.
2026-01-07 00:39:12.357906 | orchestrator | 2026-01-07 00:39:12 | INFO  | It takes a moment until task e31de345-1600-47fb-85ef-8ec6473b793d (reboot) has been started and output is visible here. 2026-01-07 00:39:22.774332 | orchestrator | 2026-01-07 00:39:22.774440 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:22.774447 | orchestrator | 2026-01-07 00:39:22.774452 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:22.774456 | orchestrator | Wednesday 07 January 2026 00:39:16 +0000 (0:00:00.212) 0:00:00.212 ***** 2026-01-07 00:39:22.774461 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:22.774466 | orchestrator | 2026-01-07 00:39:22.774488 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:22.774492 | orchestrator | Wednesday 07 January 2026 00:39:16 +0000 (0:00:00.106) 0:00:00.319 ***** 2026-01-07 00:39:22.774517 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:39:22.774521 | orchestrator | 2026-01-07 00:39:22.774525 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:22.774529 | orchestrator | Wednesday 07 January 2026 00:39:17 +0000 (0:00:00.940) 0:00:01.259 ***** 2026-01-07 00:39:22.774533 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:39:22.774537 | orchestrator | 2026-01-07 00:39:22.774541 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:22.774545 | orchestrator | 2026-01-07 00:39:22.774549 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:22.774553 | orchestrator | Wednesday 07 January 2026 00:39:17 +0000 (0:00:00.109) 0:00:01.368 ***** 2026-01-07 00:39:22.774557 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:22.774560 | 
orchestrator | 2026-01-07 00:39:22.774565 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:22.774568 | orchestrator | Wednesday 07 January 2026 00:39:17 +0000 (0:00:00.110) 0:00:01.478 ***** 2026-01-07 00:39:22.774572 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:39:22.774576 | orchestrator | 2026-01-07 00:39:22.774579 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:22.774583 | orchestrator | Wednesday 07 January 2026 00:39:18 +0000 (0:00:00.676) 0:00:02.155 ***** 2026-01-07 00:39:22.774587 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:39:22.774590 | orchestrator | 2026-01-07 00:39:22.774594 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:22.774598 | orchestrator | 2026-01-07 00:39:22.774602 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:22.774605 | orchestrator | Wednesday 07 January 2026 00:39:18 +0000 (0:00:00.138) 0:00:02.293 ***** 2026-01-07 00:39:22.774609 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:22.774613 | orchestrator | 2026-01-07 00:39:22.774630 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:22.774634 | orchestrator | Wednesday 07 January 2026 00:39:18 +0000 (0:00:00.237) 0:00:02.531 ***** 2026-01-07 00:39:22.774638 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:39:22.774642 | orchestrator | 2026-01-07 00:39:22.774646 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:22.774649 | orchestrator | Wednesday 07 January 2026 00:39:19 +0000 (0:00:00.692) 0:00:03.223 ***** 2026-01-07 00:39:22.774653 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:39:22.774657 | orchestrator | 2026-01-07 00:39:22.774661 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:22.774664 | orchestrator | 2026-01-07 00:39:22.774668 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:22.774672 | orchestrator | Wednesday 07 January 2026 00:39:19 +0000 (0:00:00.119) 0:00:03.343 ***** 2026-01-07 00:39:22.774675 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:22.774679 | orchestrator | 2026-01-07 00:39:22.774683 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:22.774687 | orchestrator | Wednesday 07 January 2026 00:39:19 +0000 (0:00:00.112) 0:00:03.456 ***** 2026-01-07 00:39:22.774690 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:39:22.774694 | orchestrator | 2026-01-07 00:39:22.774698 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-07 00:39:22.774701 | orchestrator | Wednesday 07 January 2026 00:39:20 +0000 (0:00:00.687) 0:00:04.144 ***** 2026-01-07 00:39:22.774705 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:39:22.774709 | orchestrator | 2026-01-07 00:39:22.774712 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-07 00:39:22.774716 | orchestrator | 2026-01-07 00:39:22.774720 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-07 00:39:22.774724 | orchestrator | Wednesday 07 January 2026 00:39:20 +0000 (0:00:00.134) 0:00:04.278 ***** 2026-01-07 00:39:22.774727 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:39:22.774735 | orchestrator | 2026-01-07 00:39:22.774739 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-07 00:39:22.774742 | orchestrator | Wednesday 07 January 2026 00:39:20 +0000 (0:00:00.103) 0:00:04.382 ***** 2026-01-07 
00:39:22.774746 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:39:22.774750 | orchestrator |
2026-01-07 00:39:22.774754 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-07 00:39:22.774758 | orchestrator | Wednesday 07 January 2026 00:39:21 +0000 (0:00:00.669) 0:00:05.051 *****
2026-01-07 00:39:22.774762 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:39:22.774766 | orchestrator |
2026-01-07 00:39:22.774769 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-07 00:39:22.774773 | orchestrator |
2026-01-07 00:39:22.774777 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-07 00:39:22.774781 | orchestrator | Wednesday 07 January 2026 00:39:21 +0000 (0:00:00.120) 0:00:05.172 *****
2026-01-07 00:39:22.774784 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:39:22.774788 | orchestrator |
2026-01-07 00:39:22.774792 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-07 00:39:22.774795 | orchestrator | Wednesday 07 January 2026 00:39:21 +0000 (0:00:00.114) 0:00:05.287 *****
2026-01-07 00:39:22.774799 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:39:22.774803 | orchestrator |
2026-01-07 00:39:22.774807 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-07 00:39:22.774811 | orchestrator | Wednesday 07 January 2026 00:39:22 +0000 (0:00:00.706) 0:00:05.993 *****
2026-01-07 00:39:22.774826 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:39:22.774830 | orchestrator |
2026-01-07 00:39:22.774834 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:39:22.774839 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774845 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774849 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774852 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774856 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774860 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:39:22.774864 | orchestrator |
2026-01-07 00:39:22.774867 | orchestrator |
2026-01-07 00:39:22.774871 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:39:22.774876 | orchestrator | Wednesday 07 January 2026 00:39:22 +0000 (0:00:00.049) 0:00:06.043 *****
2026-01-07 00:39:22.774880 | orchestrator | ===============================================================================
2026-01-07 00:39:22.774885 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.37s
2026-01-07 00:39:22.774889 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-01-07 00:39:22.774893 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2026-01-07 00:39:23.141430 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-07 00:39:35.258753 | orchestrator | 2026-01-07 00:39:35 | INFO  | Task b59d69b0-0e14-4e86-8545-6fc4fb4c3a4c (wait-for-connection) was prepared for execution.
2026-01-07 00:39:35.258854 | orchestrator | 2026-01-07 00:39:35 | INFO  | It takes a moment until task b59d69b0-0e14-4e86-8545-6fc4fb4c3a4c (wait-for-connection) has been started and output is visible here.
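After the reboot, the wait-for-connection play retries contact with each node until it answers again. A hedged shell sketch of that reachability check (the helper name `wait_for_ssh`, the timeout, and the retry interval are illustrative, not taken from the playbook):

```shell
# Hypothetical sketch of what the wait-for-connection step accomplishes per
# host: retry a no-op SSH command until the rebooted node answers or the
# (assumed) time budget runs out.
wait_for_ssh() {
    local host=$1
    local deadline=$((SECONDS + 600))   # assumed 10-minute budget
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "$host still unreachable" >&2
            return 1
        fi
        sleep 10
    done
}
```

In the log above, all six nodes pass this check within about 12 seconds of the play starting.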
2026-01-07 00:39:51.549167 | orchestrator |
2026-01-07 00:39:51.549274 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-07 00:39:51.549291 | orchestrator |
2026-01-07 00:39:51.549303 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-07 00:39:51.549314 | orchestrator | Wednesday 07 January 2026 00:39:39 +0000 (0:00:00.271) 0:00:00.271 *****
2026-01-07 00:39:51.549325 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:39:51.549336 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:39:51.549346 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:39:51.549356 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:39:51.549366 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:39:51.549376 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:39:51.549386 | orchestrator |
2026-01-07 00:39:51.549396 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:39:51.549407 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549472 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549484 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549494 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549505 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549515 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:39:51.549525 | orchestrator |
2026-01-07 00:39:51.549535 | orchestrator |
2026-01-07 00:39:51.549545 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:39:51.549555 | orchestrator | Wednesday 07 January 2026 00:39:51 +0000 (0:00:11.575) 0:00:11.847 *****
2026-01-07 00:39:51.549565 | orchestrator | ===============================================================================
2026-01-07 00:39:51.549575 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s
2026-01-07 00:39:51.888670 | orchestrator | + osism apply hddtemp
2026-01-07 00:40:04.038846 | orchestrator | 2026-01-07 00:40:04 | INFO  | Task ecfa1a35-029b-47f7-9522-9fbdabf01dd0 (hddtemp) was prepared for execution.
2026-01-07 00:40:04.038960 | orchestrator | 2026-01-07 00:40:04 | INFO  | It takes a moment until task ecfa1a35-029b-47f7-9522-9fbdabf01dd0 (hddtemp) has been started and output is visible here.
2026-01-07 00:40:32.718212 | orchestrator |
2026-01-07 00:40:32.718333 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-07 00:40:32.718350 | orchestrator |
2026-01-07 00:40:32.718389 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-07 00:40:32.718402 | orchestrator | Wednesday 07 January 2026 00:40:07 +0000 (0:00:00.190) 0:00:00.190 *****
2026-01-07 00:40:32.718413 | orchestrator | ok: [testbed-manager]
2026-01-07 00:40:32.718426 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:40:32.718437 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:40:32.718448 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:40:32.718459 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:40:32.718470 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:40:32.718482 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:40:32.718493 | orchestrator |
2026-01-07 00:40:32.718504 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-07 00:40:32.718516 | orchestrator | Wednesday 07 January 2026 00:40:08 +0000 (0:00:00.544) 0:00:00.734 *****
2026-01-07 00:40:32.718529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:40:32.718568 | orchestrator |
2026-01-07 00:40:32.718580 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-07 00:40:32.718592 | orchestrator | Wednesday 07 January 2026 00:40:09 +0000 (0:00:01.043) 0:00:01.777 *****
2026-01-07 00:40:32.718602 | orchestrator | ok: [testbed-manager]
2026-01-07 00:40:32.718614 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:40:32.718625 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:40:32.718636 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:40:32.718647 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:40:32.718658 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:40:32.718669 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:40:32.718682 | orchestrator |
2026-01-07 00:40:32.718695 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-07 00:40:32.718707 | orchestrator | Wednesday 07 January 2026 00:40:11 +0000 (0:00:01.811) 0:00:03.589 *****
2026-01-07 00:40:32.718720 | orchestrator | changed: [testbed-manager]
2026-01-07 00:40:32.718734 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:40:32.718747 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:40:32.718759 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:40:32.718771 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:40:32.718784 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:40:32.718802 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:40:32.718821 | orchestrator |
2026-01-07 00:40:32.718841 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-07 00:40:32.718884 | orchestrator | Wednesday 07 January 2026 00:40:12 +0000 (0:00:01.069) 0:00:04.658 *****
2026-01-07 00:40:32.718911 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:40:32.718929 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:40:32.718948 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:40:32.718967 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:40:32.718985 | orchestrator | ok: [testbed-manager]
2026-01-07 00:40:32.719005 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:40:32.719024 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:40:32.719044 | orchestrator |
2026-01-07 00:40:32.719062 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-07 00:40:32.719080 | orchestrator | Wednesday 07 January 2026 00:40:13 +0000 (0:00:01.172) 0:00:05.831 *****
2026-01-07 00:40:32.719099 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:40:32.719118 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:40:32.719138 | orchestrator | changed: [testbed-manager]
2026-01-07 00:40:32.719156 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:40:32.719175 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:40:32.719194 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:40:32.719213 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:40:32.719232 | orchestrator |
2026-01-07 00:40:32.719250 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-07 00:40:32.719270 | orchestrator | Wednesday 07 January 2026 00:40:14 +0000 (0:00:00.880) 0:00:06.712 *****
2026-01-07 00:40:32.719288 | orchestrator | changed: [testbed-manager]
2026-01-07 00:40:32.719308 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:40:32.719327 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:40:32.719347 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:40:32.719392 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:40:32.719410 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:40:32.719427 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:40:32.719445 | orchestrator |
2026-01-07 00:40:32.719464 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-07 00:40:32.719483 | orchestrator | Wednesday 07 January 2026 00:40:29 +0000 (0:00:14.653) 0:00:21.365 *****
2026-01-07 00:40:32.719502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:40:32.719539 | orchestrator |
2026-01-07 00:40:32.719559 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-07 00:40:32.719577 | orchestrator | Wednesday 07 January 2026 00:40:30 +0000 (0:00:01.206) 0:00:22.572 *****
2026-01-07 00:40:32.719596 | orchestrator | changed: [testbed-manager]
2026-01-07 00:40:32.719616 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:40:32.719634 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:40:32.719652 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:40:32.719667 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:40:32.719678 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:40:32.719689 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:40:32.719705 | orchestrator |
2026-01-07 00:40:32.719724 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:40:32.719742 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:40:32.719787 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719809 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719829 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719849 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719867 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719885 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:40:32.719904 | orchestrator |
2026-01-07 00:40:32.719922 | orchestrator |
2026-01-07 00:40:32.719939 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:40:32.719958 | orchestrator | Wednesday 07 January 2026 00:40:32 +0000 (0:00:01.921) 0:00:24.493 *****
2026-01-07 00:40:32.719976 | orchestrator | ===============================================================================
2026-01-07 00:40:32.719994 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.65s
2026-01-07 00:40:32.720012 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s
2026-01-07 00:40:32.720031 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.81s
2026-01-07 00:40:32.720049 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s
2026-01-07 00:40:32.720068 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s
2026-01-07 00:40:32.720087 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.07s
2026-01-07 00:40:32.720105 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.04s
2026-01-07 00:40:32.720122 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.88s
2026-01-07 00:40:32.720142 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.54s
2026-01-07 00:40:33.072842 | orchestrator | ++ semver 9.5.0 7.1.1
2026-01-07 00:40:33.112449 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-07 00:40:33.112549 | orchestrator | + sudo systemctl restart manager.service
2026-01-07 00:40:46.522600 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-07 00:40:46.522736 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-07 00:40:46.522754 | orchestrator | + local max_attempts=60
2026-01-07 00:40:46.522798 | orchestrator | + local name=ceph-ansible
2026-01-07 00:40:46.522810 | orchestrator | + local attempt_num=1
2026-01-07 00:40:46.522822 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:40:46.558920 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:40:46.559007 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:40:46.559021 | orchestrator | + sleep 5
2026-01-07 00:40:51.562432 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:40:51.614473 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:40:51.614570 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:40:51.614585 | orchestrator | + sleep 5
2026-01-07 00:40:56.617420 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:40:56.648791 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:40:56.648887 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:40:56.648909 | orchestrator | + sleep 5
2026-01-07 00:41:01.652978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:01.694732 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:01.694831 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:01.694847 | orchestrator | + sleep 5
2026-01-07 00:41:06.699885 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:06.742127 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:06.742241 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:06.742255 | orchestrator | + sleep 5
2026-01-07 00:41:11.747734 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:11.787248 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:11.787414 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:11.787431 | orchestrator | + sleep 5
2026-01-07 00:41:16.793242 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:16.834370 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:16.834507 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:16.834524 | orchestrator | + sleep 5
2026-01-07 00:41:21.840399 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:21.872517 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:21.872626 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:21.872637 | orchestrator | + sleep 5
2026-01-07 00:41:26.877706 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:26.957057 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:26.957190 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:26.957206 | orchestrator | + sleep 5
2026-01-07 00:41:31.960663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:31.998172 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:31.998227 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:31.998240 | orchestrator | + sleep 5
2026-01-07 00:41:37.003871 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:37.044973 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:37.045067 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:37.045081 | orchestrator | + sleep 5
2026-01-07 00:41:42.050098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:42.096705 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:42.096775 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:42.096782 | orchestrator | + sleep 5
2026-01-07 00:41:47.102765 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:47.143081 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:47.143157 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-07 00:41:47.143166 | orchestrator | + sleep 5
2026-01-07 00:41:52.147521 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-07 00:41:52.190639 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:52.190738 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-07 00:41:52.190754 | orchestrator | + local max_attempts=60
2026-01-07 00:41:52.190767 | orchestrator | + local name=kolla-ansible
2026-01-07 00:41:52.190779 | orchestrator | + local attempt_num=1
2026-01-07 00:41:52.191401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-07 00:41:52.220927 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:52.221028 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-07 00:41:52.221050 | orchestrator | + local max_attempts=60
2026-01-07 00:41:52.221069 | orchestrator | + local name=osism-ansible
2026-01-07 00:41:52.221086 | orchestrator | + local attempt_num=1
2026-01-07 00:41:52.221415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-07 00:41:52.255497 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-07 00:41:52.255590 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-07 00:41:52.255603 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-07 00:41:52.431146 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-07 00:41:52.584488 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-07 00:41:52.745682 | orchestrator | ARA in osism-ansible already disabled.
2026-01-07 00:41:52.919321 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-07 00:41:52.919425 | orchestrator | + osism apply gather-facts
2026-01-07 00:42:05.015786 | orchestrator | 2026-01-07 00:42:05 | INFO  | Task 841bc9c3-bf37-4492-80d7-ad88cb7d5b01 (gather-facts) was prepared for execution.
2026-01-07 00:42:05.015922 | orchestrator | 2026-01-07 00:42:05 | INFO  | It takes a moment until task 841bc9c3-bf37-4492-80d7-ad88cb7d5b01 (gather-facts) has been started and output is visible here.
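The xtrace above shows `wait_for_container_healthy` polling `docker inspect` every five seconds until a container reports `healthy`. Reconstructed from that trace as a runnable helper (the behaviour on exhausting `max_attempts` is assumed, since the trace never reaches it, and the plain `docker` name stands in for the traced `/usr/bin/docker` so the helper can be exercised with a stub):

```shell
# Poll a container's health status until it reports "healthy", retrying every
# five seconds, up to max_attempts checks. Reconstructed from the xtrace;
# the failure path is an assumption.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible cycles through `unhealthy` and `starting` for roughly 65 seconds after the manager.service restart before passing, while kolla-ansible and osism-ansible are already healthy on the first check.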
2026-01-07 00:42:19.302641 | orchestrator |
2026-01-07 00:42:19.302759 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:42:19.302774 | orchestrator |
2026-01-07 00:42:19.302784 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:42:19.302793 | orchestrator | Wednesday 07 January 2026 00:42:08 +0000 (0:00:00.205) 0:00:00.205 *****
2026-01-07 00:42:19.302801 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:42:19.302810 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:42:19.302818 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:42:19.302827 | orchestrator | ok: [testbed-manager]
2026-01-07 00:42:19.302835 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:19.302843 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:19.302851 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:42:19.302859 | orchestrator |
2026-01-07 00:42:19.302867 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-07 00:42:19.302875 | orchestrator |
2026-01-07 00:42:19.302883 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-07 00:42:19.302891 | orchestrator | Wednesday 07 January 2026 00:42:18 +0000 (0:00:09.491) 0:00:09.697 *****
2026-01-07 00:42:19.302900 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:42:19.302908 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:42:19.302917 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:42:19.302925 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:42:19.302934 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:19.302941 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:19.302949 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:19.302957 | orchestrator |
2026-01-07 00:42:19.302965 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:42:19.302973 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.302983 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.302991 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.302999 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.303007 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.303015 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.303051 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-07 00:42:19.303059 | orchestrator |
2026-01-07 00:42:19.303067 | orchestrator |
2026-01-07 00:42:19.303075 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:42:19.303083 | orchestrator | Wednesday 07 January 2026 00:42:18 +0000 (0:00:00.565) 0:00:10.263 *****
2026-01-07 00:42:19.303091 | orchestrator | ===============================================================================
2026-01-07 00:42:19.303099 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.49s
2026-01-07 00:42:19.303107 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-01-07 00:42:19.597584 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-07 00:42:19.612883 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-07 00:42:19.630551 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-07 00:42:19.643473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-07 00:42:19.654506 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-07 00:42:19.665744 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-07 00:42:19.687027 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-07 00:42:19.702078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-07 00:42:19.715143 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-07 00:42:19.726751 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-07 00:42:19.736537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-07 00:42:19.745830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-07 00:42:19.755079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-07 00:42:19.764798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-07 00:42:19.774932 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-07 00:42:19.784310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-07 00:42:19.796034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-07 00:42:19.805622 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-07 00:42:19.815748 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-07 00:42:19.825718 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-07 00:42:19.840582 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-07 00:42:20.013095 | orchestrator | ok: Runtime: 0:24:16.802808
2026-01-07 00:42:20.151137 |
2026-01-07 00:42:20.151287 | TASK [Deploy services]
2026-01-07 00:42:20.683780 | orchestrator | skipping: Conditional result was False
2026-01-07 00:42:20.701456 |
2026-01-07 00:42:20.701626 | TASK [Deploy in a nutshell]
2026-01-07 00:42:21.453908 | orchestrator | + set -e
2026-01-07 00:42:21.455515 | orchestrator |
2026-01-07 00:42:21.455542 | orchestrator | # PULL IMAGES
2026-01-07 00:42:21.455547 | orchestrator |
2026-01-07 00:42:21.455561 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-07 00:42:21.455571 | orchestrator | ++ export INTERACTIVE=false
2026-01-07 00:42:21.455577 | orchestrator | ++ INTERACTIVE=false
2026-01-07 00:42:21.455600 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-07 00:42:21.455610 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-07 00:42:21.455616 | orchestrator | + source /opt/manager-vars.sh
2026-01-07 00:42:21.455621 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-07 00:42:21.455629 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-07 00:42:21.455633 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-07 00:42:21.455640 | orchestrator | ++ CEPH_VERSION=reef
2026-01-07 00:42:21.455645 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-07 00:42:21.455652 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-07 00:42:21.455656 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-07 00:42:21.455663 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-07 00:42:21.455667 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-07 00:42:21.455672 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-07 00:42:21.455676 | orchestrator | ++ export ARA=false
2026-01-07 00:42:21.455679 | orchestrator | ++ ARA=false
2026-01-07 00:42:21.455683 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-07 00:42:21.455687 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-07 00:42:21.455691 | orchestrator | ++ export TEMPEST=true
2026-01-07 00:42:21.455695 | orchestrator | ++ TEMPEST=true
2026-01-07 00:42:21.455698 | orchestrator | ++ export IS_ZUUL=true
2026-01-07 00:42:21.455702 | orchestrator | ++ IS_ZUUL=true
2026-01-07 00:42:21.455706 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241
2026-01-07 00:42:21.455710 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.241
2026-01-07 00:42:21.455714 | orchestrator | ++ export EXTERNAL_API=false
2026-01-07 00:42:21.455718 | orchestrator | ++ EXTERNAL_API=false
2026-01-07 00:42:21.455721 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-07 00:42:21.455725 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-07 00:42:21.455729 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-07 00:42:21.455733 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-07 00:42:21.455737 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-07 00:42:21.455747 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-07 00:42:21.455754 | orchestrator | + echo
2026-01-07 00:42:21.455760 | orchestrator | + echo '# PULL IMAGES'
2026-01-07 00:42:21.455764 | orchestrator | + echo
2026-01-07 00:42:21.455797 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-07 00:42:21.516783 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-07 00:42:21.516842 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-07 00:42:23.463360 | orchestrator | 2026-01-07 00:42:23 | INFO  | Trying to run play pull-images in environment custom
2026-01-07 00:42:33.552081 | orchestrator | 2026-01-07 00:42:33 | INFO  | Task 279b6898-849b-47ed-9a61-c83f8529f394 (pull-images) was prepared for execution.
2026-01-07 00:42:33.552291 | orchestrator | 2026-01-07 00:42:33 | INFO  | Task 279b6898-849b-47ed-9a61-c83f8529f394 is running in background. No more output. Check ARA for logs.
2026-01-07 00:42:35.870906 | orchestrator | 2026-01-07 00:42:35 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-07 00:42:46.090850 | orchestrator | 2026-01-07 00:42:46 | INFO  | Task e345ea58-58dd-43b3-9a80-c74dbcdfcce4 (wipe-partitions) was prepared for execution.
2026-01-07 00:42:46.090960 | orchestrator | 2026-01-07 00:42:46 | INFO  | It takes a moment until task e345ea58-58dd-43b3-9a80-c74dbcdfcce4 (wipe-partitions) has been started and output is visible here.
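The deploy script gates steps on `semver A B`: the trace shows `semver 9.5.0 7.0.0` printing `1`, which the script then tests with `[[ 1 -ge 0 ]]`, so the helper evidently prints 1, 0, or -1 as A is newer than, equal to, or older than B. A minimal stand-in with matching behaviour (implementation assumed, using GNU `sort -V` for version ordering):

```shell
# Compare two semantic versions: print 1 if $1 is newer than $2, 0 if equal,
# -1 if older. Behaviour matched to the trace; implementation is a sketch.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1    # $2 sorts lower, so $1 is the newer version
    else
        echo -1
    fi
}
```

With this reading, the `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` pattern means "only run this step when the manager version is at least 7.0.0".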
2026-01-07 00:42:58.844777 | orchestrator |
2026-01-07 00:42:58.844904 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-07 00:42:58.844931 | orchestrator |
2026-01-07 00:42:58.844949 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-07 00:42:58.844978 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.125) 0:00:00.125 *****
2026-01-07 00:42:58.844999 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:42:58.845020 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:42:58.845036 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:42:58.845047 | orchestrator |
2026-01-07 00:42:58.845058 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-07 00:42:58.845099 | orchestrator | Wednesday 07 January 2026 00:42:50 +0000 (0:00:00.569) 0:00:00.695 *****
2026-01-07 00:42:58.845112 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:58.845131 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:58.845150 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:58.845209 | orchestrator |
2026-01-07 00:42:58.845231 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-07 00:42:58.845251 | orchestrator | Wednesday 07 January 2026 00:42:51 +0000 (0:00:00.366) 0:00:01.062 *****
2026-01-07 00:42:58.845270 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:42:58.845285 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:42:58.845299 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:42:58.845312 | orchestrator |
2026-01-07 00:42:58.845325 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-07 00:42:58.845338 | orchestrator | Wednesday 07 January 2026 00:42:51 +0000 (0:00:00.584) 0:00:01.646 *****
2026-01-07 00:42:58.845350 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:42:58.845364 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:42:58.845377 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:42:58.845390 | orchestrator |
2026-01-07 00:42:58.845410 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-07 00:42:58.845431 | orchestrator | Wednesday 07 January 2026 00:42:52 +0000 (0:00:00.255) 0:00:01.901 *****
2026-01-07 00:42:58.845451 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-07 00:42:58.845475 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-07 00:42:58.845489 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-07 00:42:58.845502 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-07 00:42:58.845515 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-07 00:42:58.845527 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-07 00:42:58.845540 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-07 00:42:58.845553 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-07 00:42:58.845565 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-07 00:42:58.845578 | orchestrator |
2026-01-07 00:42:58.845591 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-07 00:42:58.845604 | orchestrator | Wednesday 07 January 2026 00:42:53 +0000 (0:00:01.188) 0:00:03.090 *****
2026-01-07 00:42:58.845619 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-07 00:42:58.845632 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-07 00:42:58.845643 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-07 00:42:58.845654 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-07 00:42:58.845665 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-07 00:42:58.845675 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-07 00:42:58.845686 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-07 00:42:58.845696 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-07 00:42:58.845707 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-07 00:42:58.845717 | orchestrator |
2026-01-07 00:42:58.845729 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-07 00:42:58.845740 | orchestrator | Wednesday 07 January 2026 00:42:54 +0000 (0:00:01.565) 0:00:04.655 *****
2026-01-07 00:42:58.845750 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-07 00:42:58.845761 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-07 00:42:58.845772 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-07 00:42:58.845782 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-07 00:42:58.845793 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-07 00:42:58.845803 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-07 00:42:58.845814 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-07 00:42:58.845824 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-07 00:42:58.845852 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-07 00:42:58.845863 | orchestrator |
2026-01-07 00:42:58.845874 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-07 00:42:58.845884 | orchestrator | Wednesday 07 January 2026 00:42:57 +0000 (0:00:02.323) 0:00:06.979 *****
2026-01-07 00:42:58.845895 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:42:58.845906 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:42:58.845917 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:42:58.845927 | orchestrator |
2026-01-07 00:42:58.845938 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-07 00:42:58.845948 | orchestrator | Wednesday 07 January 2026 00:42:57 +0000 (0:00:00.627) 0:00:07.607 *****
2026-01-07 00:42:58.845960 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:42:58.845970 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:42:58.845981 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:42:58.845991 | orchestrator |
2026-01-07 00:42:58.846002 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:42:58.846014 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:42:58.846094 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:42:58.846126 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:42:58.846138 | orchestrator |
2026-01-07 00:42:58.846149 | orchestrator |
2026-01-07 00:42:58.846159 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:42:58.846223 | orchestrator | Wednesday 07 January 2026 00:42:58 +0000 (0:00:00.648) 0:00:08.256 *****
2026-01-07 00:42:58.846234 | orchestrator | ===============================================================================
2026-01-07 00:42:58.846351 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s
2026-01-07 00:42:58.846373 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s
2026-01-07 00:42:58.846393 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2026-01-07 00:42:58.846412 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2026-01-07 00:42:58.846430 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-01-07 00:42:58.846449 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s
2026-01-07 00:42:58.846466 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2026-01-07 00:42:58.846477 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s
2026-01-07 00:42:58.846487 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-01-07 00:43:11.112021 | orchestrator | 2026-01-07 00:43:11 | INFO  | Task ca186692-1c20-49c5-86fd-5bf4ae525868 (facts) was prepared for execution.
2026-01-07 00:43:11.112140 | orchestrator | 2026-01-07 00:43:11 | INFO  | It takes a moment until task ca186692-1c20-49c5-86fd-5bf4ae525868 (facts) has been started and output is visible here.
2026-01-07 00:43:23.581439 | orchestrator |
2026-01-07 00:43:23.581567 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-07 00:43:23.581585 | orchestrator |
2026-01-07 00:43:23.581596 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-07 00:43:23.581607 | orchestrator | Wednesday 07 January 2026 00:43:15 +0000 (0:00:00.261) 0:00:00.261 *****
2026-01-07 00:43:23.581618 | orchestrator | ok: [testbed-manager]
2026-01-07 00:43:23.581630 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:43:23.581642 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:43:23.581653 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:43:23.581690 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:23.581701 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:43:23.581711 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:43:23.581721 | orchestrator |
2026-01-07 00:43:23.581732 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07
00:43:23.581743 | orchestrator | Wednesday 07 January 2026 00:43:16 +0000 (0:00:01.106) 0:00:01.367 *****
2026-01-07 00:43:23.581754 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:43:23.581762 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:43:23.581768 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:43:23.581774 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:43:23.581780 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:23.581786 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:43:23.581793 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:43:23.581799 | orchestrator |
2026-01-07 00:43:23.581805 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-07 00:43:23.581811 | orchestrator |
2026-01-07 00:43:23.581831 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-07 00:43:23.581837 | orchestrator | Wednesday 07 January 2026 00:43:17 +0000 (0:00:01.256) 0:00:02.624 *****
2026-01-07 00:43:23.581843 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:43:23.581849 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:43:23.581856 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:43:23.581863 | orchestrator | ok: [testbed-manager]
2026-01-07 00:43:23.581869 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:23.581875 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:43:23.581881 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:43:23.581887 | orchestrator |
2026-01-07 00:43:23.581893 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-07 00:43:23.581899 | orchestrator |
2026-01-07 00:43:23.581906 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-07 00:43:23.581912 | orchestrator | Wednesday 07 January 2026 00:43:22 +0000 (0:00:04.972) 0:00:07.596 *****
2026-01-07 00:43:23.581918 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:43:23.581924 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:43:23.581930 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:43:23.581936 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:43:23.581942 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:23.581948 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:43:23.581954 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:43:23.581960 | orchestrator |
2026-01-07 00:43:23.581966 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:43:23.581973 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.581981 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.581988 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.581994 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.582000 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.582006 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.582059 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-07 00:43:23.582068 | orchestrator |
2026-01-07 00:43:23.582079 | orchestrator |
2026-01-07 00:43:23.582089 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:43:23.582108 | orchestrator | Wednesday 07 January 2026 00:43:23 +0000 (0:00:00.522) 0:00:08.118 *****
2026-01-07 00:43:23.582119 | orchestrator | ===============================================================================
2026-01-07 00:43:23.582128 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.97s
2026-01-07 00:43:23.582137 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2026-01-07 00:43:23.582148 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2026-01-07 00:43:23.582181 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-01-07 00:43:25.987730 | orchestrator | 2026-01-07 00:43:25 | INFO  | Task 01f3336e-9767-4e93-a6bc-0c025fb94035 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-07 00:43:25.987848 | orchestrator | 2026-01-07 00:43:25 | INFO  | It takes a moment until task 01f3336e-9767-4e93-a6bc-0c025fb94035 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-07 00:43:37.788374 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-07 00:43:37.788495 | orchestrator | 2.16.14
2026-01-07 00:43:37.788507 | orchestrator |
2026-01-07 00:43:37.788517 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-07 00:43:37.788526 | orchestrator |
2026-01-07 00:43:37.788534 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:43:37.788542 | orchestrator | Wednesday 07 January 2026 00:43:30 +0000 (0:00:00.306) 0:00:00.306 *****
2026-01-07 00:43:37.788551 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-07 00:43:37.788559 | orchestrator |
2026-01-07 00:43:37.788566 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:43:37.788574 | orchestrator | Wednesday 07 January 2026 00:43:30 +0000 (0:00:00.253) 0:00:00.560 *****
2026-01-07 00:43:37.788581 |
orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:37.788589 | orchestrator |
2026-01-07 00:43:37.788596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788603 | orchestrator | Wednesday 07 January 2026 00:43:30 +0000 (0:00:00.217) 0:00:00.778 *****
2026-01-07 00:43:37.788611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:43:37.788628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:43:37.788636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:43:37.788643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:43:37.788650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:43:37.788657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:43:37.788665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:43:37.788672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:43:37.788679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-07 00:43:37.788686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:43:37.788694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:43:37.788701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:43:37.788708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:43:37.788715 | orchestrator |
2026-01-07 00:43:37.788722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788730 | orchestrator | Wednesday 07 January 2026 00:43:31 +0000 (0:00:00.506) 0:00:01.284 *****
2026-01-07 00:43:37.788758 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788766 | orchestrator |
2026-01-07 00:43:37.788774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788781 | orchestrator | Wednesday 07 January 2026 00:43:31 +0000 (0:00:00.209) 0:00:01.493 *****
2026-01-07 00:43:37.788788 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788795 | orchestrator |
2026-01-07 00:43:37.788803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788810 | orchestrator | Wednesday 07 January 2026 00:43:31 +0000 (0:00:00.201) 0:00:01.694 *****
2026-01-07 00:43:37.788817 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788824 | orchestrator |
2026-01-07 00:43:37.788831 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788839 | orchestrator | Wednesday 07 January 2026 00:43:31 +0000 (0:00:00.220) 0:00:01.915 *****
2026-01-07 00:43:37.788850 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788857 | orchestrator |
2026-01-07 00:43:37.788864 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788872 | orchestrator | Wednesday 07 January 2026 00:43:32 +0000 (0:00:00.203) 0:00:02.119 *****
2026-01-07 00:43:37.788879 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788886 | orchestrator |
2026-01-07 00:43:37.788895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788904 | orchestrator | Wednesday 07 January 2026 00:43:32 +0000 (0:00:00.197) 0:00:02.317 *****
2026-01-07 00:43:37.788912 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788920 | orchestrator |
2026-01-07 00:43:37.788929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788938 | orchestrator | Wednesday 07 January 2026 00:43:32 +0000 (0:00:00.194) 0:00:02.512 *****
2026-01-07 00:43:37.788946 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788954 | orchestrator |
2026-01-07 00:43:37.788963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.788971 | orchestrator | Wednesday 07 January 2026 00:43:32 +0000 (0:00:00.198) 0:00:02.711 *****
2026-01-07 00:43:37.788979 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.788988 | orchestrator |
2026-01-07 00:43:37.788996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.789004 | orchestrator | Wednesday 07 January 2026 00:43:32 +0000 (0:00:00.189) 0:00:02.900 *****
2026-01-07 00:43:37.789012 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67)
2026-01-07 00:43:37.789021 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67)
2026-01-07 00:43:37.789030 | orchestrator |
2026-01-07 00:43:37.789038 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.789059 | orchestrator | Wednesday 07 January 2026 00:43:33 +0000 (0:00:00.392) 0:00:03.293 *****
2026-01-07 00:43:37.789068 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef)
2026-01-07 00:43:37.789081 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef)
2026-01-07 00:43:37.789090 | orchestrator |
2026-01-07 00:43:37.789099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.789107 | orchestrator | Wednesday 07 January 2026 00:43:33 +0000 (0:00:00.685) 0:00:03.979 *****
2026-01-07 00:43:37.789115 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff)
2026-01-07 00:43:37.789124 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff)
2026-01-07 00:43:37.789133 | orchestrator |
2026-01-07 00:43:37.789141 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.789167 | orchestrator | Wednesday 07 January 2026 00:43:34 +0000 (0:00:00.709) 0:00:04.689 *****
2026-01-07 00:43:37.789180 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9)
2026-01-07 00:43:37.789188 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9)
2026-01-07 00:43:37.789195 | orchestrator |
2026-01-07 00:43:37.789202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:37.789209 | orchestrator | Wednesday 07 January 2026 00:43:35 +0000 (0:00:00.977) 0:00:05.666 *****
2026-01-07 00:43:37.789216 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:43:37.789223 | orchestrator |
2026-01-07 00:43:37.789231 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789238 | orchestrator | Wednesday 07 January 2026 00:43:35 +0000 (0:00:00.302) 0:00:05.969 *****
2026-01-07 00:43:37.789245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:43:37.789252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
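The tasks above attach each stable `/dev/disk/by-id` name (e.g. `scsi-0QEMU_QEMU_HARDDISK_...`) to the kernel device it points at. As an illustration only (the play's own logic lives in `/ansible/tasks/_add-device-links.yml`, which is not shown here), that mapping can be reproduced by resolving the symlinks:

```shell
#!/usr/bin/env bash
# Illustration: print "link-name -> device-name" for every entry in a
# directory of device symlinks, the way /dev/disk/by-id maps stable
# names like scsi-0QEMU_QEMU_HARDDISK_... back to sdb, sdc, ...
resolve_links() {
    local link
    for link in "$1"/*; do
        [ -e "$link" ] || continue
        printf '%s -> %s\n' "${link##*/}" "$(basename "$(readlink -f "$link")")"
    done
}

# On a real node: resolve_links /dev/disk/by-id
```

`readlink -f` canonicalizes the link target, so nested or relative symlinks still resolve to the final block device.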
2026-01-07 00:43:37.789259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:43:37.789266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:43:37.789273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:43:37.789280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:43:37.789287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:43:37.789294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:43:37.789301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-07 00:43:37.789308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:43:37.789315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:43:37.789322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:43:37.789330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:43:37.789337 | orchestrator |
2026-01-07 00:43:37.789344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789351 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.386) 0:00:06.355 *****
2026-01-07 00:43:37.789358 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789365 | orchestrator |
2026-01-07 00:43:37.789373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789380 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.230) 0:00:06.586 *****
2026-01-07 00:43:37.789387 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789394 | orchestrator |
2026-01-07 00:43:37.789401 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789408 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.185) 0:00:06.772 *****
2026-01-07 00:43:37.789415 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789422 | orchestrator |
2026-01-07 00:43:37.789430 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789437 | orchestrator | Wednesday 07 January 2026 00:43:36 +0000 (0:00:00.221) 0:00:06.994 *****
2026-01-07 00:43:37.789444 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789451 | orchestrator |
2026-01-07 00:43:37.789458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789465 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.197) 0:00:07.191 *****
2026-01-07 00:43:37.789477 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789484 | orchestrator |
2026-01-07 00:43:37.789492 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789499 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.195) 0:00:07.386 *****
2026-01-07 00:43:37.789506 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789513 | orchestrator |
2026-01-07 00:43:37.789520 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:37.789527 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.207) 0:00:07.594 *****
2026-01-07 00:43:37.789534 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:37.789541 | orchestrator |
2026-01-07 00:43:37.789553 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462470 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.209) 0:00:07.804 *****
2026-01-07 00:43:45.462609 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.462629 | orchestrator |
2026-01-07 00:43:45.462642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462653 | orchestrator | Wednesday 07 January 2026 00:43:37 +0000 (0:00:00.206) 0:00:08.011 *****
2026-01-07 00:43:45.462665 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-07 00:43:45.462699 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-07 00:43:45.462711 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-07 00:43:45.462723 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-07 00:43:45.462733 | orchestrator |
2026-01-07 00:43:45.462745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462757 | orchestrator | Wednesday 07 January 2026 00:43:38 +0000 (0:00:00.980) 0:00:08.991 *****
2026-01-07 00:43:45.462768 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.462778 | orchestrator |
2026-01-07 00:43:45.462790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462800 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:00.203) 0:00:09.195 *****
2026-01-07 00:43:45.462811 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.462822 | orchestrator |
2026-01-07 00:43:45.462833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462844 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:00.203) 0:00:09.398 *****
2026-01-07 00:43:45.462855 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.462866 | orchestrator |
2026-01-07 00:43:45.462877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:43:45.462888 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:00.210) 0:00:09.609 *****
2026-01-07 00:43:45.462898 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.462909 | orchestrator |
2026-01-07 00:43:45.462920 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-07 00:43:45.462931 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:00.203) 0:00:09.812 *****
2026-01-07 00:43:45.462942 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-07 00:43:45.462953 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-07 00:43:45.462964 | orchestrator |
2026-01-07 00:43:45.462977 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-07 00:43:45.462990 | orchestrator | Wednesday 07 January 2026 00:43:39 +0000 (0:00:00.189) 0:00:10.002 *****
2026-01-07 00:43:45.463003 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463015 | orchestrator |
2026-01-07 00:43:45.463027 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-07 00:43:45.463040 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.139) 0:00:10.141 *****
2026-01-07 00:43:45.463052 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463064 | orchestrator |
2026-01-07 00:43:45.463077 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-07 00:43:45.463089 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.139) 0:00:10.281 *****
2026-01-07 00:43:45.463127 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463140 | orchestrator |
2026-01-07 00:43:45.463180 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-07 00:43:45.463193 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.154) 0:00:10.435 *****
2026-01-07 00:43:45.463205 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:45.463218 | orchestrator |
2026-01-07 00:43:45.463230 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-07 00:43:45.463243 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.163) 0:00:10.599 *****
2026-01-07 00:43:45.463258 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29ea93ed-0a9a-5585-8fd4-59056229f60b'}})
2026-01-07 00:43:45.463271 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}})
2026-01-07 00:43:45.463283 | orchestrator |
2026-01-07 00:43:45.463296 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-07 00:43:45.463310 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.165) 0:00:10.765 *****
2026-01-07 00:43:45.463325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29ea93ed-0a9a-5585-8fd4-59056229f60b'}})
2026-01-07 00:43:45.463347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}})
2026-01-07 00:43:45.463358 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463369 | orchestrator |
2026-01-07 00:43:45.463380 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-07 00:43:45.463391 | orchestrator | Wednesday 07 January 2026 00:43:40 +0000 (0:00:00.152) 0:00:10.918 *****
2026-01-07 00:43:45.463401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29ea93ed-0a9a-5585-8fd4-59056229f60b'}})
2026-01-07 00:43:45.463412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}})
2026-01-07 00:43:45.463423 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463433 | orchestrator |
2026-01-07 00:43:45.463444 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-07 00:43:45.463455 | orchestrator | Wednesday 07 January 2026 00:43:41 +0000 (0:00:00.429) 0:00:11.347 *****
2026-01-07 00:43:45.463466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29ea93ed-0a9a-5585-8fd4-59056229f60b'}})
2026-01-07 00:43:45.463496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}})
2026-01-07 00:43:45.463508 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463519 | orchestrator |
2026-01-07 00:43:45.463529 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-07 00:43:45.463540 | orchestrator | Wednesday 07 January 2026 00:43:41 +0000 (0:00:00.158) 0:00:11.506 *****
2026-01-07 00:43:45.463551 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:45.463561 | orchestrator |
2026-01-07 00:43:45.463572 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-07 00:43:45.463583 | orchestrator | Wednesday 07 January 2026 00:43:41 +0000 (0:00:00.142) 0:00:11.660 *****
2026-01-07 00:43:45.463594 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:43:45.463604 | orchestrator |
2026-01-07 00:43:45.463615 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-07 00:43:45.463625 | orchestrator | Wednesday 07 January 2026 00:43:41 +0000 (0:00:00.140) 0:00:11.802 *****
2026-01-07 00:43:45.463636 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463647 | orchestrator |
2026-01-07 00:43:45.463657 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-07 00:43:45.463668 | orchestrator | Wednesday 07 January 2026 00:43:41 +0000 (0:00:00.140) 0:00:11.943 *****
2026-01-07 00:43:45.463688 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463698 | orchestrator |
2026-01-07 00:43:45.463709 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-07 00:43:45.463720 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.127) 0:00:12.070 *****
2026-01-07 00:43:45.463731 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:43:45.463741 | orchestrator |
2026-01-07 00:43:45.463752 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-07 00:43:45.463763 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.146) 0:00:12.210 *****
2026-01-07 00:43:45.463774 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 00:43:45.463784 | orchestrator |     "ceph_osd_devices": {
2026-01-07 00:43:45.463795 | orchestrator |         "sdb": {
2026-01-07 00:43:45.463806 | orchestrator |             "osd_lvm_uuid": "29ea93ed-0a9a-5585-8fd4-59056229f60b"
2026-01-07 00:43:45.463817 | orchestrator |         },
2026-01-07 00:43:45.463828 | orchestrator |         "sdc": {
2026-01-07 00:43:45.463839 | orchestrator |             "osd_lvm_uuid": "6ed406c7-6b31-5121-9e07-a95f5a11b8c1"
2026-01-07 00:43:45.463850 | orchestrator |         }
2026-01-07 00:43:45.463861 | orchestrator |     }
2026-01-07 00:43:45.463872 | orchestrator | }
2026-01-07 00:43:45.463883 | orchestrator |
2026-01-07 00:43:45.463893 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-07 00:43:45.463910 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.146) 0:00:12.357 *****
2026-01-07 00:43:45.463921 | orchestrator | skipping: [testbed-node-3] 2026-01-07
00:43:45.463932 | orchestrator | 2026-01-07 00:43:45.463943 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:43:45.463954 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.127) 0:00:12.484 ***** 2026-01-07 00:43:45.463964 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:45.463975 | orchestrator | 2026-01-07 00:43:45.463986 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-07 00:43:45.463997 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.128) 0:00:12.612 ***** 2026-01-07 00:43:45.464007 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:43:45.464018 | orchestrator | 2026-01-07 00:43:45.464029 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-07 00:43:45.464039 | orchestrator | Wednesday 07 January 2026 00:43:42 +0000 (0:00:00.125) 0:00:12.737 ***** 2026-01-07 00:43:45.464050 | orchestrator | changed: [testbed-node-3] => { 2026-01-07 00:43:45.464061 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-07 00:43:45.464072 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:43:45.464083 | orchestrator |  "sdb": { 2026-01-07 00:43:45.464093 | orchestrator |  "osd_lvm_uuid": "29ea93ed-0a9a-5585-8fd4-59056229f60b" 2026-01-07 00:43:45.464104 | orchestrator |  }, 2026-01-07 00:43:45.464115 | orchestrator |  "sdc": { 2026-01-07 00:43:45.464125 | orchestrator |  "osd_lvm_uuid": "6ed406c7-6b31-5121-9e07-a95f5a11b8c1" 2026-01-07 00:43:45.464136 | orchestrator |  } 2026-01-07 00:43:45.464168 | orchestrator |  }, 2026-01-07 00:43:45.464180 | orchestrator |  "lvm_volumes": [ 2026-01-07 00:43:45.464191 | orchestrator |  { 2026-01-07 00:43:45.464202 | orchestrator |  "data": "osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b", 2026-01-07 00:43:45.464213 | orchestrator |  "data_vg": "ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b" 2026-01-07 
00:43:45.464223 | orchestrator |  }, 2026-01-07 00:43:45.464234 | orchestrator |  { 2026-01-07 00:43:45.464245 | orchestrator |  "data": "osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1", 2026-01-07 00:43:45.464256 | orchestrator |  "data_vg": "ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1" 2026-01-07 00:43:45.464267 | orchestrator |  } 2026-01-07 00:43:45.464278 | orchestrator |  ] 2026-01-07 00:43:45.464288 | orchestrator |  } 2026-01-07 00:43:45.464299 | orchestrator | } 2026-01-07 00:43:45.464318 | orchestrator | 2026-01-07 00:43:45.464329 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-07 00:43:45.464340 | orchestrator | Wednesday 07 January 2026 00:43:43 +0000 (0:00:00.418) 0:00:13.156 ***** 2026-01-07 00:43:45.464350 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-07 00:43:45.464361 | orchestrator | 2026-01-07 00:43:45.464372 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-07 00:43:45.464383 | orchestrator | 2026-01-07 00:43:45.464393 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:43:45.464404 | orchestrator | Wednesday 07 January 2026 00:43:44 +0000 (0:00:01.829) 0:00:14.986 ***** 2026-01-07 00:43:45.464415 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:43:45.464426 | orchestrator | 2026-01-07 00:43:45.464436 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 00:43:45.464447 | orchestrator | Wednesday 07 January 2026 00:43:45 +0000 (0:00:00.254) 0:00:15.240 ***** 2026-01-07 00:43:45.464458 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:45.464469 | orchestrator | 2026-01-07 00:43:45.464486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.087480 | orchestrator | Wednesday 07 
January 2026 00:43:45 +0000 (0:00:00.241) 0:00:15.482 ***** 2026-01-07 00:43:53.087623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:43:53.087641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:43:53.087653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:43:53.087664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:43:53.087676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:43:53.087687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:43:53.087698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:43:53.087734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:43:53.087746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:43:53.087757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:43:53.087767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:43:53.087778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:43:53.087795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:43:53.087806 | orchestrator | 2026-01-07 00:43:53.087820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.087831 | orchestrator | Wednesday 07 January 2026 00:43:45 +0000 (0:00:00.402) 0:00:15.884 ***** 2026-01-07 
00:43:53.087842 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.087855 | orchestrator | 2026-01-07 00:43:53.087866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.087877 | orchestrator | Wednesday 07 January 2026 00:43:46 +0000 (0:00:00.230) 0:00:16.114 ***** 2026-01-07 00:43:53.087888 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.087899 | orchestrator | 2026-01-07 00:43:53.087910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.087921 | orchestrator | Wednesday 07 January 2026 00:43:46 +0000 (0:00:00.199) 0:00:16.314 ***** 2026-01-07 00:43:53.087932 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.087942 | orchestrator | 2026-01-07 00:43:53.087953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.087964 | orchestrator | Wednesday 07 January 2026 00:43:46 +0000 (0:00:00.188) 0:00:16.502 ***** 2026-01-07 00:43:53.088000 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088014 | orchestrator | 2026-01-07 00:43:53.088027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088040 | orchestrator | Wednesday 07 January 2026 00:43:46 +0000 (0:00:00.189) 0:00:16.692 ***** 2026-01-07 00:43:53.088052 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088065 | orchestrator | 2026-01-07 00:43:53.088077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088090 | orchestrator | Wednesday 07 January 2026 00:43:47 +0000 (0:00:00.578) 0:00:17.270 ***** 2026-01-07 00:43:53.088103 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088115 | orchestrator | 2026-01-07 00:43:53.088128 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-01-07 00:43:53.088140 | orchestrator | Wednesday 07 January 2026 00:43:47 +0000 (0:00:00.196) 0:00:17.466 ***** 2026-01-07 00:43:53.088180 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088192 | orchestrator | 2026-01-07 00:43:53.088205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088217 | orchestrator | Wednesday 07 January 2026 00:43:47 +0000 (0:00:00.191) 0:00:17.659 ***** 2026-01-07 00:43:53.088230 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088242 | orchestrator | 2026-01-07 00:43:53.088256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088268 | orchestrator | Wednesday 07 January 2026 00:43:47 +0000 (0:00:00.196) 0:00:17.855 ***** 2026-01-07 00:43:53.088281 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7) 2026-01-07 00:43:53.088296 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7) 2026-01-07 00:43:53.088308 | orchestrator | 2026-01-07 00:43:53.088321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088334 | orchestrator | Wednesday 07 January 2026 00:43:48 +0000 (0:00:00.458) 0:00:18.313 ***** 2026-01-07 00:43:53.088347 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab) 2026-01-07 00:43:53.088360 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab) 2026-01-07 00:43:53.088373 | orchestrator | 2026-01-07 00:43:53.088383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088394 | orchestrator | Wednesday 07 January 2026 00:43:48 +0000 (0:00:00.516) 0:00:18.830 ***** 2026-01-07 00:43:53.088405 | 
orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1) 2026-01-07 00:43:53.088416 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1) 2026-01-07 00:43:53.088426 | orchestrator | 2026-01-07 00:43:53.088437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088466 | orchestrator | Wednesday 07 January 2026 00:43:49 +0000 (0:00:00.433) 0:00:19.264 ***** 2026-01-07 00:43:53.088478 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151) 2026-01-07 00:43:53.088489 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151) 2026-01-07 00:43:53.088500 | orchestrator | 2026-01-07 00:43:53.088518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:43:53.088529 | orchestrator | Wednesday 07 January 2026 00:43:49 +0000 (0:00:00.439) 0:00:19.703 ***** 2026-01-07 00:43:53.088540 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:43:53.088551 | orchestrator | 2026-01-07 00:43:53.088562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088572 | orchestrator | Wednesday 07 January 2026 00:43:49 +0000 (0:00:00.310) 0:00:20.014 ***** 2026-01-07 00:43:53.088583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:43:53.088602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:43:53.088613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:43:53.088624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop3) 2026-01-07 00:43:53.088635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:43:53.088646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:43:53.088656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:43:53.088667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:43:53.088678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:43:53.088688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:43:53.088699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:43:53.088710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:43:53.088721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:43:53.088732 | orchestrator | 2026-01-07 00:43:53.088742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088753 | orchestrator | Wednesday 07 January 2026 00:43:50 +0000 (0:00:00.367) 0:00:20.381 ***** 2026-01-07 00:43:53.088764 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088775 | orchestrator | 2026-01-07 00:43:53.088785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088796 | orchestrator | Wednesday 07 January 2026 00:43:50 +0000 (0:00:00.619) 0:00:21.001 ***** 2026-01-07 00:43:53.088807 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088818 | orchestrator | 2026-01-07 00:43:53.088828 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088839 | orchestrator | Wednesday 07 January 2026 00:43:51 +0000 (0:00:00.189) 0:00:21.191 ***** 2026-01-07 00:43:53.088849 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088860 | orchestrator | 2026-01-07 00:43:53.088871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088882 | orchestrator | Wednesday 07 January 2026 00:43:51 +0000 (0:00:00.188) 0:00:21.380 ***** 2026-01-07 00:43:53.088893 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088903 | orchestrator | 2026-01-07 00:43:53.088915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088925 | orchestrator | Wednesday 07 January 2026 00:43:51 +0000 (0:00:00.171) 0:00:21.551 ***** 2026-01-07 00:43:53.088936 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088947 | orchestrator | 2026-01-07 00:43:53.088957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.088968 | orchestrator | Wednesday 07 January 2026 00:43:51 +0000 (0:00:00.162) 0:00:21.714 ***** 2026-01-07 00:43:53.088978 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.088989 | orchestrator | 2026-01-07 00:43:53.089000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.089010 | orchestrator | Wednesday 07 January 2026 00:43:51 +0000 (0:00:00.179) 0:00:21.894 ***** 2026-01-07 00:43:53.089021 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.089032 | orchestrator | 2026-01-07 00:43:53.089043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.089053 | orchestrator | Wednesday 07 January 2026 00:43:52 +0000 (0:00:00.165) 0:00:22.059 ***** 2026-01-07 00:43:53.089064 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:53.089081 | orchestrator | 2026-01-07 00:43:53.089092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.089103 | orchestrator | Wednesday 07 January 2026 00:43:52 +0000 (0:00:00.165) 0:00:22.225 ***** 2026-01-07 00:43:53.089113 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:43:53.089126 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:43:53.089137 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:43:53.089164 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:43:53.089175 | orchestrator | 2026-01-07 00:43:53.089186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:53.089197 | orchestrator | Wednesday 07 January 2026 00:43:52 +0000 (0:00:00.688) 0:00:22.913 ***** 2026-01-07 00:43:53.089208 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500520 | orchestrator | 2026-01-07 00:43:58.500640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:58.500651 | orchestrator | Wednesday 07 January 2026 00:43:53 +0000 (0:00:00.170) 0:00:23.084 ***** 2026-01-07 00:43:58.500659 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500667 | orchestrator | 2026-01-07 00:43:58.500674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:58.500702 | orchestrator | Wednesday 07 January 2026 00:43:53 +0000 (0:00:00.167) 0:00:23.251 ***** 2026-01-07 00:43:58.500709 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500716 | orchestrator | 2026-01-07 00:43:58.500722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:43:58.500729 | orchestrator | Wednesday 07 January 2026 00:43:53 +0000 (0:00:00.169) 
0:00:23.421 ***** 2026-01-07 00:43:58.500736 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500742 | orchestrator | 2026-01-07 00:43:58.500749 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-07 00:43:58.500755 | orchestrator | Wednesday 07 January 2026 00:43:53 +0000 (0:00:00.518) 0:00:23.940 ***** 2026-01-07 00:43:58.500762 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-07 00:43:58.500769 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-07 00:43:58.500775 | orchestrator | 2026-01-07 00:43:58.500781 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-07 00:43:58.500788 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.154) 0:00:24.095 ***** 2026-01-07 00:43:58.500794 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500801 | orchestrator | 2026-01-07 00:43:58.500807 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-07 00:43:58.500814 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.112) 0:00:24.207 ***** 2026-01-07 00:43:58.500820 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500827 | orchestrator | 2026-01-07 00:43:58.500833 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-07 00:43:58.500840 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.110) 0:00:24.317 ***** 2026-01-07 00:43:58.500846 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500853 | orchestrator | 2026-01-07 00:43:58.500859 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-07 00:43:58.500866 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.123) 0:00:24.441 ***** 2026-01-07 00:43:58.500872 | orchestrator | ok: 
[testbed-node-4] 2026-01-07 00:43:58.500880 | orchestrator | 2026-01-07 00:43:58.500886 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-07 00:43:58.500893 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.106) 0:00:24.547 ***** 2026-01-07 00:43:58.500900 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b3967c5-6312-5066-b0c3-d93b1266106e'}}) 2026-01-07 00:43:58.500907 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}}) 2026-01-07 00:43:58.500933 | orchestrator | 2026-01-07 00:43:58.500940 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-07 00:43:58.500946 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.142) 0:00:24.690 ***** 2026-01-07 00:43:58.500954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b3967c5-6312-5066-b0c3-d93b1266106e'}})  2026-01-07 00:43:58.500963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}})  2026-01-07 00:43:58.500969 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.500975 | orchestrator | 2026-01-07 00:43:58.500982 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-07 00:43:58.500988 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.125) 0:00:24.816 ***** 2026-01-07 00:43:58.500995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b3967c5-6312-5066-b0c3-d93b1266106e'}})  2026-01-07 00:43:58.501001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}})  2026-01-07 00:43:58.501007 | orchestrator | skipping: [testbed-node-4] 2026-01-07 
00:43:58.501014 | orchestrator | 2026-01-07 00:43:58.501020 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-07 00:43:58.501027 | orchestrator | Wednesday 07 January 2026 00:43:54 +0000 (0:00:00.125) 0:00:24.941 ***** 2026-01-07 00:43:58.501033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b3967c5-6312-5066-b0c3-d93b1266106e'}})  2026-01-07 00:43:58.501040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}})  2026-01-07 00:43:58.501047 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.501053 | orchestrator | 2026-01-07 00:43:58.501059 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-07 00:43:58.501066 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.136) 0:00:25.078 ***** 2026-01-07 00:43:58.501072 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:58.501079 | orchestrator | 2026-01-07 00:43:58.501085 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-07 00:43:58.501091 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.118) 0:00:25.196 ***** 2026-01-07 00:43:58.501098 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:43:58.501104 | orchestrator | 2026-01-07 00:43:58.501110 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-07 00:43:58.501117 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.115) 0:00:25.312 ***** 2026-01-07 00:43:58.501138 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.501159 | orchestrator | 2026-01-07 00:43:58.501166 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-07 00:43:58.501171 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 
(0:00:00.242) 0:00:25.554 ***** 2026-01-07 00:43:58.501178 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.501184 | orchestrator | 2026-01-07 00:43:58.501191 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-07 00:43:58.501197 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.115) 0:00:25.670 ***** 2026-01-07 00:43:58.501203 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.501210 | orchestrator | 2026-01-07 00:43:58.501216 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-07 00:43:58.501223 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.114) 0:00:25.784 ***** 2026-01-07 00:43:58.501229 | orchestrator | ok: [testbed-node-4] => { 2026-01-07 00:43:58.501236 | orchestrator |  "ceph_osd_devices": { 2026-01-07 00:43:58.501242 | orchestrator |  "sdb": { 2026-01-07 00:43:58.501249 | orchestrator |  "osd_lvm_uuid": "0b3967c5-6312-5066-b0c3-d93b1266106e" 2026-01-07 00:43:58.501256 | orchestrator |  }, 2026-01-07 00:43:58.501268 | orchestrator |  "sdc": { 2026-01-07 00:43:58.501279 | orchestrator |  "osd_lvm_uuid": "f1de19d5-0a66-5bfe-890b-5e52c2bc57c1" 2026-01-07 00:43:58.501285 | orchestrator |  } 2026-01-07 00:43:58.501292 | orchestrator |  } 2026-01-07 00:43:58.501299 | orchestrator | } 2026-01-07 00:43:58.501306 | orchestrator | 2026-01-07 00:43:58.501312 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-07 00:43:58.501319 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 (0:00:00.121) 0:00:25.906 ***** 2026-01-07 00:43:58.501325 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:43:58.501331 | orchestrator | 2026-01-07 00:43:58.501338 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-07 00:43:58.501344 | orchestrator | Wednesday 07 January 2026 00:43:55 +0000 
(0:00:00.113) 0:00:26.020 *****
2026-01-07 00:43:58.501350 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:43:58.501355 | orchestrator |
2026-01-07 00:43:58.501362 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-07 00:43:58.501367 | orchestrator | Wednesday 07 January 2026 00:43:56 +0000 (0:00:00.110) 0:00:26.130 *****
2026-01-07 00:43:58.501374 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:43:58.501380 | orchestrator |
2026-01-07 00:43:58.501387 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-07 00:43:58.501393 | orchestrator | Wednesday 07 January 2026 00:43:56 +0000 (0:00:00.112) 0:00:26.243 *****
2026-01-07 00:43:58.501399 | orchestrator | changed: [testbed-node-4] => {
2026-01-07 00:43:58.501406 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-07 00:43:58.501412 | orchestrator |         "ceph_osd_devices": {
2026-01-07 00:43:58.501419 | orchestrator |             "sdb": {
2026-01-07 00:43:58.501428 | orchestrator |                 "osd_lvm_uuid": "0b3967c5-6312-5066-b0c3-d93b1266106e"
2026-01-07 00:43:58.501435 | orchestrator |             },
2026-01-07 00:43:58.501441 | orchestrator |             "sdc": {
2026-01-07 00:43:58.501447 | orchestrator |                 "osd_lvm_uuid": "f1de19d5-0a66-5bfe-890b-5e52c2bc57c1"
2026-01-07 00:43:58.501454 | orchestrator |             }
2026-01-07 00:43:58.501460 | orchestrator |         },
2026-01-07 00:43:58.501466 | orchestrator |         "lvm_volumes": [
2026-01-07 00:43:58.501473 | orchestrator |             {
2026-01-07 00:43:58.501479 | orchestrator |                 "data": "osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e",
2026-01-07 00:43:58.501486 | orchestrator |                 "data_vg": "ceph-0b3967c5-6312-5066-b0c3-d93b1266106e"
2026-01-07 00:43:58.501492 | orchestrator |             },
2026-01-07 00:43:58.501499 | orchestrator |             {
2026-01-07 00:43:58.501505 | orchestrator |                 "data": "osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1",
2026-01-07 00:43:58.501511 | orchestrator |                 "data_vg": "ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1"
2026-01-07 00:43:58.501517 | orchestrator |             }
2026-01-07 00:43:58.501524 | orchestrator |         ]
2026-01-07 00:43:58.501530 | orchestrator |     }
2026-01-07 00:43:58.501536 | orchestrator | }
2026-01-07 00:43:58.501543 | orchestrator |
2026-01-07 00:43:58.501549 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-07 00:43:58.501556 | orchestrator | Wednesday 07 January 2026 00:43:56 +0000 (0:00:00.175) 0:00:26.419 *****
2026-01-07 00:43:58.501562 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-07 00:43:58.501569 | orchestrator |
2026-01-07 00:43:58.501575 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-07 00:43:58.501581 | orchestrator |
2026-01-07 00:43:58.501587 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:43:58.501594 | orchestrator | Wednesday 07 January 2026 00:43:57 +0000 (0:00:00.953) 0:00:27.372 *****
2026-01-07 00:43:58.501600 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-07 00:43:58.501607 | orchestrator |
2026-01-07 00:43:58.501613 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:43:58.501624 | orchestrator | Wednesday 07 January 2026 00:43:57 +0000 (0:00:00.546) 0:00:27.919 *****
2026-01-07 00:43:58.501631 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:43:58.501637 | orchestrator |
2026-01-07 00:43:58.501643 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:43:58.501650 | orchestrator | Wednesday 07 January 2026 00:43:58 +0000 (0:00:00.216) 0:00:28.136 *****
2026-01-07 00:43:58.501656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:43:58.501663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:43:58.501669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:43:58.501675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:43:58.501682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:43:58.501692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:44:05.838117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:44:05.838289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:44:05.838304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-07 00:44:05.838315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:44:05.838324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:44:05.838333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:44:05.838342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:44:05.838351 | orchestrator |
2026-01-07 00:44:05.838361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838371 | orchestrator | Wednesday 07 January 2026 00:43:58 +0000 (0:00:00.384) 0:00:28.521 *****
2026-01-07 00:44:05.838380 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838390 | orchestrator |
2026-01-07 00:44:05.838398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838407 | orchestrator | Wednesday 07 January 2026 00:43:58 +0000 (0:00:00.173) 0:00:28.694 *****
2026-01-07 00:44:05.838415 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838424 | orchestrator |
2026-01-07 00:44:05.838433 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838442 | orchestrator | Wednesday 07 January 2026 00:43:58 +0000 (0:00:00.195) 0:00:28.889 *****
2026-01-07 00:44:05.838450 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838459 | orchestrator |
2026-01-07 00:44:05.838468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838476 | orchestrator | Wednesday 07 January 2026 00:43:59 +0000 (0:00:00.201) 0:00:29.090 *****
2026-01-07 00:44:05.838485 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838494 | orchestrator |
2026-01-07 00:44:05.838502 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838511 | orchestrator | Wednesday 07 January 2026 00:43:59 +0000 (0:00:00.194) 0:00:29.285 *****
2026-01-07 00:44:05.838519 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838528 | orchestrator |
2026-01-07 00:44:05.838537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838546 | orchestrator | Wednesday 07 January 2026 00:43:59 +0000 (0:00:00.207) 0:00:29.492 *****
2026-01-07 00:44:05.838556 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838567 | orchestrator |
2026-01-07 00:44:05.838599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838610 | orchestrator | Wednesday 07 January 2026 00:43:59 +0000 (0:00:00.179) 0:00:29.671 *****
2026-01-07 00:44:05.838644 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838655 | orchestrator |
2026-01-07 00:44:05.838666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838676 | orchestrator | Wednesday 07 January 2026 00:43:59 +0000 (0:00:00.175) 0:00:29.847 *****
2026-01-07 00:44:05.838686 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.838696 | orchestrator |
2026-01-07 00:44:05.838706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838718 | orchestrator | Wednesday 07 January 2026 00:44:00 +0000 (0:00:00.189) 0:00:30.037 *****
2026-01-07 00:44:05.838739 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015)
2026-01-07 00:44:05.838758 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015)
2026-01-07 00:44:05.838767 | orchestrator |
2026-01-07 00:44:05.838776 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838785 | orchestrator | Wednesday 07 January 2026 00:44:00 +0000 (0:00:00.655) 0:00:30.693 *****
2026-01-07 00:44:05.838794 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c)
2026-01-07 00:44:05.838802 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c)
2026-01-07 00:44:05.838811 | orchestrator |
2026-01-07 00:44:05.838819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838828 | orchestrator | Wednesday 07 January 2026 00:44:01 +0000 (0:00:00.451) 0:00:31.144 *****
2026-01-07 00:44:05.838837 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5)
2026-01-07 00:44:05.838845 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5)
2026-01-07 00:44:05.838854 | orchestrator |
2026-01-07 00:44:05.838863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838871 | orchestrator | Wednesday 07 January 2026 00:44:01 +0000 (0:00:00.390) 0:00:31.535 *****
2026-01-07 00:44:05.838880 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8)
2026-01-07 00:44:05.838888 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8)
2026-01-07 00:44:05.838897 | orchestrator |
2026-01-07 00:44:05.838905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:44:05.838914 | orchestrator | Wednesday 07 January 2026 00:44:01 +0000 (0:00:00.397) 0:00:31.932 *****
2026-01-07 00:44:05.838923 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:44:05.838931 | orchestrator |
2026-01-07 00:44:05.838940 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.838965 | orchestrator | Wednesday 07 January 2026 00:44:02 +0000 (0:00:00.344) 0:00:32.277 *****
2026-01-07 00:44:05.838975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:44:05.838984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:44:05.838992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:44:05.839001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:44:05.839009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:44:05.839018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:44:05.839026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:44:05.839036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:44:05.839056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-07 00:44:05.839065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:44:05.839073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:44:05.839082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:44:05.839090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:44:05.839099 | orchestrator |
2026-01-07 00:44:05.839107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839116 | orchestrator | Wednesday 07 January 2026 00:44:02 +0000 (0:00:00.350) 0:00:32.628 *****
2026-01-07 00:44:05.839124 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839176 | orchestrator |
2026-01-07 00:44:05.839187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839196 | orchestrator | Wednesday 07 January 2026 00:44:02 +0000 (0:00:00.219) 0:00:32.848 *****
2026-01-07 00:44:05.839204 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839213 | orchestrator |
2026-01-07 00:44:05.839222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839230 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:00.210) 0:00:33.059 *****
2026-01-07 00:44:05.839239 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839248 | orchestrator |
2026-01-07 00:44:05.839256 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839265 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:00.216) 0:00:33.275 *****
2026-01-07 00:44:05.839274 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839282 | orchestrator |
2026-01-07 00:44:05.839291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839299 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:00.213) 0:00:33.489 *****
2026-01-07 00:44:05.839308 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839316 | orchestrator |
2026-01-07 00:44:05.839325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839334 | orchestrator | Wednesday 07 January 2026 00:44:03 +0000 (0:00:00.167) 0:00:33.656 *****
2026-01-07 00:44:05.839342 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839351 | orchestrator |
2026-01-07 00:44:05.839360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839368 | orchestrator | Wednesday 07 January 2026 00:44:04 +0000 (0:00:00.449) 0:00:34.105 *****
2026-01-07 00:44:05.839377 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839385 | orchestrator |
2026-01-07 00:44:05.839394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839403 | orchestrator | Wednesday 07 January 2026 00:44:04 +0000 (0:00:00.206) 0:00:34.312 *****
2026-01-07 00:44:05.839411 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839420 | orchestrator |
2026-01-07 00:44:05.839428 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839437 | orchestrator | Wednesday 07 January 2026 00:44:04 +0000 (0:00:00.201) 0:00:34.513 *****
2026-01-07 00:44:05.839446 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-07 00:44:05.839455 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-07 00:44:05.839479 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-07 00:44:05.839489 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-07 00:44:05.839497 | orchestrator |
2026-01-07 00:44:05.839506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839515 | orchestrator | Wednesday 07 January 2026 00:44:05 +0000 (0:00:00.596) 0:00:35.110 *****
2026-01-07 00:44:05.839523 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839532 | orchestrator |
2026-01-07 00:44:05.839548 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839563 | orchestrator | Wednesday 07 January 2026 00:44:05 +0000 (0:00:00.203) 0:00:35.313 *****
2026-01-07 00:44:05.839573 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839582 | orchestrator |
2026-01-07 00:44:05.839590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839599 | orchestrator | Wednesday 07 January 2026 00:44:05 +0000 (0:00:00.168) 0:00:35.481 *****
2026-01-07 00:44:05.839607 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839616 | orchestrator |
2026-01-07 00:44:05.839625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:44:05.839633 | orchestrator | Wednesday 07 January 2026 00:44:05 +0000 (0:00:00.180) 0:00:35.661 *****
2026-01-07 00:44:05.839642 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:05.839651 | orchestrator |
2026-01-07 00:44:05.839665 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-07 00:44:10.255074 | orchestrator | Wednesday 07 January 2026 00:44:05 +0000 (0:00:00.197) 0:00:35.859 *****
2026-01-07 00:44:10.255224 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-07 00:44:10.255241 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-07 00:44:10.255254 | orchestrator |
2026-01-07 00:44:10.255263 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-07 00:44:10.255270 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.170) 0:00:36.029 *****
2026-01-07 00:44:10.255277 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255284 | orchestrator |
2026-01-07 00:44:10.255291 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-07 00:44:10.255297 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.130) 0:00:36.160 *****
2026-01-07 00:44:10.255303 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255310 | orchestrator |
2026-01-07 00:44:10.255319 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-07 00:44:10.255329 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.130) 0:00:36.290 *****
2026-01-07 00:44:10.255338 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255349 | orchestrator |
2026-01-07 00:44:10.255359 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-07 00:44:10.255369 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.333) 0:00:36.624 *****
2026-01-07 00:44:10.255379 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:44:10.255391 | orchestrator |
2026-01-07 00:44:10.255402 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-07 00:44:10.255414 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.155) 0:00:36.780 *****
2026-01-07 00:44:10.255425 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dee3f89e-6ecc-57ac-a128-7ff5a8885640'}})
2026-01-07 00:44:10.255438 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1079410-ca98-5ed2-be64-415d52b0d3f8'}})
2026-01-07 00:44:10.255449 | orchestrator |
2026-01-07 00:44:10.255459 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-07 00:44:10.255471 | orchestrator | Wednesday 07 January 2026 00:44:06 +0000 (0:00:00.195) 0:00:36.975 *****
2026-01-07 00:44:10.255483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dee3f89e-6ecc-57ac-a128-7ff5a8885640'}})
2026-01-07 00:44:10.255519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1079410-ca98-5ed2-be64-415d52b0d3f8'}})
2026-01-07 00:44:10.255534 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255545 | orchestrator |
2026-01-07 00:44:10.255557 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-07 00:44:10.255570 | orchestrator | Wednesday 07 January 2026 00:44:07 +0000 (0:00:00.202) 0:00:37.178 *****
2026-01-07 00:44:10.255587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dee3f89e-6ecc-57ac-a128-7ff5a8885640'}})
2026-01-07 00:44:10.255631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1079410-ca98-5ed2-be64-415d52b0d3f8'}})
2026-01-07 00:44:10.255652 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255670 | orchestrator |
2026-01-07 00:44:10.255684 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-07 00:44:10.255703 | orchestrator | Wednesday 07 January 2026 00:44:07 +0000 (0:00:00.305) 0:00:37.483 *****
2026-01-07 00:44:10.255723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dee3f89e-6ecc-57ac-a128-7ff5a8885640'}})
2026-01-07 00:44:10.255738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1079410-ca98-5ed2-be64-415d52b0d3f8'}})
2026-01-07 00:44:10.255749 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255759 | orchestrator |
2026-01-07 00:44:10.255770 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-07 00:44:10.255781 | orchestrator | Wednesday 07 January 2026 00:44:07 +0000 (0:00:00.217) 0:00:37.701 *****
2026-01-07 00:44:10.255792 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:44:10.255802 | orchestrator |
2026-01-07 00:44:10.255812 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-07 00:44:10.255830 | orchestrator | Wednesday 07 January 2026 00:44:07 +0000 (0:00:00.208) 0:00:37.909 *****
2026-01-07 00:44:10.255849 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:44:10.255865 | orchestrator |
2026-01-07 00:44:10.255876 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-07 00:44:10.255886 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.153) 0:00:38.062 *****
2026-01-07 00:44:10.255896 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255907 | orchestrator |
2026-01-07 00:44:10.255919 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-07 00:44:10.255930 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.144) 0:00:38.207 *****
2026-01-07 00:44:10.255940 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255952 | orchestrator |
2026-01-07 00:44:10.255963 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-07 00:44:10.255975 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.143) 0:00:38.351 *****
2026-01-07 00:44:10.255986 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.255997 | orchestrator |
2026-01-07 00:44:10.256009 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-07 00:44:10.256020 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.140) 0:00:38.492 *****
2026-01-07 00:44:10.256044 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 00:44:10.256057 | orchestrator |     "ceph_osd_devices": {
2026-01-07 00:44:10.256070 | orchestrator |         "sdb": {
2026-01-07 00:44:10.256104 | orchestrator |             "osd_lvm_uuid": "dee3f89e-6ecc-57ac-a128-7ff5a8885640"
2026-01-07 00:44:10.256118 | orchestrator |         },
2026-01-07 00:44:10.256157 | orchestrator |         "sdc": {
2026-01-07 00:44:10.256169 | orchestrator |             "osd_lvm_uuid": "c1079410-ca98-5ed2-be64-415d52b0d3f8"
2026-01-07 00:44:10.256178 | orchestrator |         }
2026-01-07 00:44:10.256189 | orchestrator |     }
2026-01-07 00:44:10.256200 | orchestrator | }
2026-01-07 00:44:10.256211 | orchestrator |
2026-01-07 00:44:10.256221 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-07 00:44:10.256231 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.127) 0:00:38.619 *****
2026-01-07 00:44:10.256242 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.256252 | orchestrator |
2026-01-07 00:44:10.256263 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-07 00:44:10.256273 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.298) 0:00:38.918 *****
2026-01-07 00:44:10.256283 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.256304 | orchestrator |
2026-01-07 00:44:10.256314 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-07 00:44:10.256324 | orchestrator | Wednesday 07 January 2026 00:44:08 +0000 (0:00:00.106) 0:00:39.024 *****
2026-01-07 00:44:10.256335 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:44:10.256345 | orchestrator |
2026-01-07 00:44:10.256356 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-07 00:44:10.256364 | orchestrator | Wednesday 07 January 2026 00:44:09 +0000 (0:00:00.111) 0:00:39.135 *****
2026-01-07 00:44:10.256370 | orchestrator | changed: [testbed-node-5] => {
2026-01-07 00:44:10.256376 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-07 00:44:10.256382 | orchestrator |         "ceph_osd_devices": {
2026-01-07 00:44:10.256390 | orchestrator |             "sdb": {
2026-01-07 00:44:10.256400 | orchestrator |                 "osd_lvm_uuid": "dee3f89e-6ecc-57ac-a128-7ff5a8885640"
2026-01-07 00:44:10.256410 | orchestrator |             },
2026-01-07 00:44:10.256420 | orchestrator |             "sdc": {
2026-01-07 00:44:10.256430 | orchestrator |                 "osd_lvm_uuid": "c1079410-ca98-5ed2-be64-415d52b0d3f8"
2026-01-07 00:44:10.256440 | orchestrator |             }
2026-01-07 00:44:10.256450 | orchestrator |         },
2026-01-07 00:44:10.256465 | orchestrator |         "lvm_volumes": [
2026-01-07 00:44:10.256475 | orchestrator |             {
2026-01-07 00:44:10.256486 | orchestrator |                 "data": "osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640",
2026-01-07 00:44:10.256495 | orchestrator |                 "data_vg": "ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640"
2026-01-07 00:44:10.256504 | orchestrator |             },
2026-01-07 00:44:10.256513 | orchestrator |             {
2026-01-07 00:44:10.256521 | orchestrator |                 "data": "osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8",
2026-01-07 00:44:10.256541 | orchestrator |                 "data_vg": "ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8"
2026-01-07 00:44:10.256552 | orchestrator |             }
2026-01-07 00:44:10.256560 | orchestrator |         ]
2026-01-07 00:44:10.256574 | orchestrator |     }
2026-01-07 00:44:10.256583 | orchestrator | }
2026-01-07 00:44:10.256592 | orchestrator |
2026-01-07 00:44:10.256601 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-07 00:44:10.256610 | orchestrator | Wednesday 07 January 2026 00:44:09 +0000 (0:00:00.199) 0:00:39.335 *****
2026-01-07 00:44:10.256619 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-07 00:44:10.256628 | orchestrator |
2026-01-07 00:44:10.256637 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:44:10.256646 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-07 00:44:10.256657 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-07 00:44:10.256665 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-07 00:44:10.256674 | orchestrator |
2026-01-07 00:44:10.256682 | orchestrator |
2026-01-07 00:44:10.256692 | orchestrator |
2026-01-07 00:44:10.256701 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:44:10.256710 | orchestrator | Wednesday 07 January 2026 00:44:10 +0000 (0:00:00.925) 0:00:40.261 *****
2026-01-07 00:44:10.256719 | orchestrator | ===============================================================================
2026-01-07 00:44:10.256728 | orchestrator | Write configuration file ------------------------------------------------ 3.71s
2026-01-07 00:44:10.256737 | orchestrator | Add known links to the list of available block devices ------------------ 1.29s
2026-01-07 00:44:10.256746 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-01-07 00:44:10.256755 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s
2026-01-07 00:44:10.256771 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-01-07 00:44:10.256780 | orchestrator | Add known links to the list of available block devices ------------------ 0.98s
2026-01-07 00:44:10.256789 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.86s
2026-01-07 00:44:10.256797 | orchestrator | Print configuration data ------------------------------------------------ 0.79s
2026-01-07 00:44:10.256806 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-01-07 00:44:10.256814 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-01-07 00:44:10.256823 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-01-07 00:44:10.256831 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2026-01-07 00:44:10.256840 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-07 00:44:10.256858 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-01-07 00:44:10.460304 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.61s
2026-01-07 00:44:10.460405 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-01-07 00:44:10.460411 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-01-07 00:44:10.460416 | orchestrator | Print WAL devices ------------------------------------------------------- 0.54s
2026-01-07 00:44:10.460421 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s
2026-01-07 00:44:10.460426 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2026-01-07 00:44:32.908244 | orchestrator | 2026-01-07 00:44:32 | INFO  | Task f5483de3-4d32-4b13-9cdb-e5c8be349d12 (sync inventory) is running in background. Output coming soon.
2026-01-07 00:44:58.220462 | orchestrator | 2026-01-07 00:44:34 | INFO  | Starting group_vars file reorganization
2026-01-07 00:44:58.220581 | orchestrator | 2026-01-07 00:44:34 | INFO  | Moved 0 file(s) to their respective directories
2026-01-07 00:44:58.220599 | orchestrator | 2026-01-07 00:44:34 | INFO  | Group_vars file reorganization completed
2026-01-07 00:44:58.220621 | orchestrator | 2026-01-07 00:44:37 | INFO  | Starting variable preparation from inventory
2026-01-07 00:44:58.221396 | orchestrator | 2026-01-07 00:44:39 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-07 00:44:58.221450 | orchestrator | 2026-01-07 00:44:39 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-07 00:44:58.221458 | orchestrator | 2026-01-07 00:44:39 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-07 00:44:58.221464 | orchestrator | 2026-01-07 00:44:39 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-07 00:44:58.221471 | orchestrator | 2026-01-07 00:44:39 | INFO  | Variable preparation completed
2026-01-07 00:44:58.221477 | orchestrator | 2026-01-07 00:44:41 | INFO  | Starting inventory overwrite handling
2026-01-07 00:44:58.221483 | orchestrator | 2026-01-07 00:44:41 | INFO  | Handling group overwrites in 99-overwrite
2026-01-07 00:44:58.221489 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removing group frr:children from 60-generic
2026-01-07 00:44:58.221495 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-07 00:44:58.221503 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-07 00:44:58.221512 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-07 00:44:58.221520 | orchestrator | 2026-01-07 00:44:41 | INFO  | Handling group overwrites in 20-roles
2026-01-07 00:44:58.221528 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-07 00:44:58.221564 | orchestrator | 2026-01-07 00:44:41 | INFO  | Removed 5 group(s) in total
2026-01-07 00:44:58.221575 | orchestrator | 2026-01-07 00:44:41 | INFO  | Inventory overwrite handling completed
2026-01-07 00:44:58.221598 | orchestrator | 2026-01-07 00:44:42 | INFO  | Starting merge of inventory files
2026-01-07 00:44:58.221604 | orchestrator | 2026-01-07 00:44:42 | INFO  | Inventory files merged successfully
2026-01-07 00:44:58.221609 | orchestrator | 2026-01-07 00:44:47 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-07 00:44:58.221614 | orchestrator | 2026-01-07 00:44:56 | INFO  | Successfully wrote ClusterShell configuration
2026-01-07 00:44:58.221620 | orchestrator | [master f19b13c] 2026-01-07-00-44
2026-01-07 00:44:58.221627 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-07 00:45:00.164699 | orchestrator | 2026-01-07 00:45:00 | INFO  | Task f9d0fe27-5bdd-457a-8843-51d14b83ca37 (ceph-create-lvm-devices) was prepared for execution.
2026-01-07 00:45:00.164804 | orchestrator | 2026-01-07 00:45:00 | INFO  | It takes a moment until task f9d0fe27-5bdd-457a-8843-51d14b83ca37 (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-07 00:45:10.418409 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-07 00:45:10.418518 | orchestrator | 2.16.14
2026-01-07 00:45:10.418532 | orchestrator |
2026-01-07 00:45:10.418543 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-07 00:45:10.418553 | orchestrator |
2026-01-07 00:45:10.418562 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:45:10.418572 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.230) 0:00:00.230 *****
2026-01-07 00:45:10.418581 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-07 00:45:10.418590 | orchestrator |
2026-01-07 00:45:10.418599 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:45:10.418608 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.205) 0:00:00.435 *****
2026-01-07 00:45:10.418617 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:45:10.418626 | orchestrator |
2026-01-07 00:45:10.418635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418645 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.165) 0:00:00.601 *****
2026-01-07 00:45:10.418654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-07 00:45:10.418663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-07 00:45:10.418671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-07 00:45:10.418680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-07 00:45:10.418689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-07 00:45:10.418697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-07 00:45:10.418706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-07 00:45:10.418714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-07 00:45:10.418723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-07 00:45:10.418748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-07 00:45:10.418758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-07 00:45:10.418766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-07 00:45:10.418775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-07 00:45:10.418805 | orchestrator |
2026-01-07 00:45:10.418815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418823 | orchestrator | Wednesday 07 January 2026 00:45:04 +0000 (0:00:00.450) 0:00:01.052 *****
2026-01-07 00:45:10.418832 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.418841 | orchestrator |
2026-01-07 00:45:10.418850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418858 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.173) 0:00:01.225 *****
2026-01-07 00:45:10.418867 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.418875 | orchestrator |
2026-01-07 00:45:10.418884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418897 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.165) 0:00:01.391 *****
2026-01-07 00:45:10.418906 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.418915 | orchestrator |
2026-01-07 00:45:10.418926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418936 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.154) 0:00:01.546 *****
2026-01-07 00:45:10.418946 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.418956 | orchestrator |
2026-01-07 00:45:10.418967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.418976 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.160) 0:00:01.706 *****
2026-01-07 00:45:10.418986 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.418997 | orchestrator |
2026-01-07 00:45:10.419007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.419017 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.159) 0:00:01.865 *****
2026-01-07 00:45:10.419027 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.419037 | orchestrator |
2026-01-07 00:45:10.419047 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.419056 | orchestrator | Wednesday 07 January 2026 00:45:05 +0000 (0:00:00.160) 0:00:02.026 *****
2026-01-07 00:45:10.419066 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.419076 | orchestrator |
2026-01-07 00:45:10.419086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:10.419122 | orchestrator | Wednesday 07 January 2026 00:45:06 +0000 (0:00:00.180) 0:00:02.206 *****
2026-01-07 00:45:10.419132 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:45:10.419142 | orchestrator |
2026-01-07 00:45:10.419152 | orchestrator | TASK [Add known links to the list of available block devices]
****************** 2026-01-07 00:45:10.419162 | orchestrator | Wednesday 07 January 2026 00:45:06 +0000 (0:00:00.195) 0:00:02.401 ***** 2026-01-07 00:45:10.419173 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67) 2026-01-07 00:45:10.419185 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67) 2026-01-07 00:45:10.419195 | orchestrator | 2026-01-07 00:45:10.419206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:10.419233 | orchestrator | Wednesday 07 January 2026 00:45:06 +0000 (0:00:00.417) 0:00:02.819 ***** 2026-01-07 00:45:10.419242 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef) 2026-01-07 00:45:10.419251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef) 2026-01-07 00:45:10.419260 | orchestrator | 2026-01-07 00:45:10.419269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:10.419277 | orchestrator | Wednesday 07 January 2026 00:45:07 +0000 (0:00:00.563) 0:00:03.382 ***** 2026-01-07 00:45:10.419286 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff) 2026-01-07 00:45:10.419294 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff) 2026-01-07 00:45:10.419310 | orchestrator | 2026-01-07 00:45:10.419319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:10.419328 | orchestrator | Wednesday 07 January 2026 00:45:07 +0000 (0:00:00.523) 0:00:03.905 ***** 2026-01-07 00:45:10.419336 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9) 2026-01-07 00:45:10.419345 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9) 2026-01-07 00:45:10.419354 | orchestrator | 2026-01-07 00:45:10.419363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:10.419371 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:00.668) 0:00:04.574 ***** 2026-01-07 00:45:10.419380 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:45:10.419388 | orchestrator | 2026-01-07 00:45:10.419397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419406 | orchestrator | Wednesday 07 January 2026 00:45:08 +0000 (0:00:00.299) 0:00:04.874 ***** 2026-01-07 00:45:10.419424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-07 00:45:10.419433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-07 00:45:10.419442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-07 00:45:10.419450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-07 00:45:10.419459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-07 00:45:10.419467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-07 00:45:10.419476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-07 00:45:10.419484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-07 00:45:10.419493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-07 00:45:10.419501 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-07 00:45:10.419510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-07 00:45:10.419518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-07 00:45:10.419527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-07 00:45:10.419536 | orchestrator | 2026-01-07 00:45:10.419544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419553 | orchestrator | Wednesday 07 January 2026 00:45:09 +0000 (0:00:00.381) 0:00:05.256 ***** 2026-01-07 00:45:10.419561 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419570 | orchestrator | 2026-01-07 00:45:10.419578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419587 | orchestrator | Wednesday 07 January 2026 00:45:09 +0000 (0:00:00.174) 0:00:05.431 ***** 2026-01-07 00:45:10.419596 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419604 | orchestrator | 2026-01-07 00:45:10.419613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419622 | orchestrator | Wednesday 07 January 2026 00:45:09 +0000 (0:00:00.183) 0:00:05.614 ***** 2026-01-07 00:45:10.419630 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419639 | orchestrator | 2026-01-07 00:45:10.419647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419656 | orchestrator | Wednesday 07 January 2026 00:45:09 +0000 (0:00:00.193) 0:00:05.808 ***** 2026-01-07 00:45:10.419665 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419679 | orchestrator | 2026-01-07 00:45:10.419688 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419697 | orchestrator | Wednesday 07 January 2026 00:45:09 +0000 (0:00:00.191) 0:00:06.000 ***** 2026-01-07 00:45:10.419705 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419714 | orchestrator | 2026-01-07 00:45:10.419723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419731 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:00.193) 0:00:06.193 ***** 2026-01-07 00:45:10.419740 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419748 | orchestrator | 2026-01-07 00:45:10.419757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:10.419765 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:00.177) 0:00:06.371 ***** 2026-01-07 00:45:10.419774 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:10.419783 | orchestrator | 2026-01-07 00:45:10.419796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158197 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:00.180) 0:00:06.551 ***** 2026-01-07 00:45:18.158282 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158289 | orchestrator | 2026-01-07 00:45:18.158294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158299 | orchestrator | Wednesday 07 January 2026 00:45:10 +0000 (0:00:00.183) 0:00:06.735 ***** 2026-01-07 00:45:18.158303 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-07 00:45:18.158308 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-07 00:45:18.158313 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-07 00:45:18.158317 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-07 00:45:18.158320 | orchestrator | 2026-01-07 
00:45:18.158325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158329 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:00.950) 0:00:07.685 ***** 2026-01-07 00:45:18.158333 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158336 | orchestrator | 2026-01-07 00:45:18.158340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158344 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:00.197) 0:00:07.883 ***** 2026-01-07 00:45:18.158348 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158352 | orchestrator | 2026-01-07 00:45:18.158355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158359 | orchestrator | Wednesday 07 January 2026 00:45:11 +0000 (0:00:00.185) 0:00:08.068 ***** 2026-01-07 00:45:18.158363 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158367 | orchestrator | 2026-01-07 00:45:18.158371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:18.158375 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.190) 0:00:08.258 ***** 2026-01-07 00:45:18.158379 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158382 | orchestrator | 2026-01-07 00:45:18.158386 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:45:18.158390 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.179) 0:00:08.438 ***** 2026-01-07 00:45:18.158394 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158397 | orchestrator | 2026-01-07 00:45:18.158401 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:45:18.158405 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.132) 
0:00:08.570 ***** 2026-01-07 00:45:18.158421 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29ea93ed-0a9a-5585-8fd4-59056229f60b'}}) 2026-01-07 00:45:18.158426 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}}) 2026-01-07 00:45:18.158430 | orchestrator | 2026-01-07 00:45:18.158434 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:45:18.158451 | orchestrator | Wednesday 07 January 2026 00:45:12 +0000 (0:00:00.178) 0:00:08.749 ***** 2026-01-07 00:45:18.158456 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'}) 2026-01-07 00:45:18.158462 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}) 2026-01-07 00:45:18.158465 | orchestrator | 2026-01-07 00:45:18.158469 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-07 00:45:18.158476 | orchestrator | Wednesday 07 January 2026 00:45:14 +0000 (0:00:01.934) 0:00:10.683 ***** 2026-01-07 00:45:18.158480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158489 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158493 | orchestrator | 2026-01-07 00:45:18.158496 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-07 00:45:18.158500 | orchestrator | Wednesday 07 January 2026 
00:45:14 +0000 (0:00:00.139) 0:00:10.823 ***** 2026-01-07 00:45:18.158504 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'}) 2026-01-07 00:45:18.158508 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}) 2026-01-07 00:45:18.158511 | orchestrator | 2026-01-07 00:45:18.158515 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-07 00:45:18.158520 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:01.450) 0:00:12.274 ***** 2026-01-07 00:45:18.158523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158531 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158535 | orchestrator | 2026-01-07 00:45:18.158538 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-07 00:45:18.158542 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.166) 0:00:12.441 ***** 2026-01-07 00:45:18.158576 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158580 | orchestrator | 2026-01-07 00:45:18.158584 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-07 00:45:18.158588 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.141) 0:00:12.582 ***** 2026-01-07 00:45:18.158592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 
'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158599 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158603 | orchestrator | 2026-01-07 00:45:18.158607 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-07 00:45:18.158611 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.347) 0:00:12.929 ***** 2026-01-07 00:45:18.158614 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158618 | orchestrator | 2026-01-07 00:45:18.158622 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-07 00:45:18.158625 | orchestrator | Wednesday 07 January 2026 00:45:16 +0000 (0:00:00.153) 0:00:13.083 ***** 2026-01-07 00:45:18.158633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158641 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158644 | orchestrator | 2026-01-07 00:45:18.158648 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-07 00:45:18.158652 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.143) 0:00:13.226 ***** 2026-01-07 00:45:18.158656 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158659 | orchestrator | 2026-01-07 00:45:18.158663 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-07 00:45:18.158667 | orchestrator | 
Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.142) 0:00:13.368 ***** 2026-01-07 00:45:18.158670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158678 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158682 | orchestrator | 2026-01-07 00:45:18.158685 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-07 00:45:18.158689 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.160) 0:00:13.528 ***** 2026-01-07 00:45:18.158693 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:18.158697 | orchestrator | 2026-01-07 00:45:18.158701 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-07 00:45:18.158705 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.154) 0:00:13.683 ***** 2026-01-07 00:45:18.158713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158722 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158726 | orchestrator | 2026-01-07 00:45:18.158731 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-07 00:45:18.158735 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.155) 0:00:13.838 ***** 2026-01-07 00:45:18.158740 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158749 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158753 | orchestrator | 2026-01-07 00:45:18.158757 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-07 00:45:18.158762 | orchestrator | Wednesday 07 January 2026 00:45:17 +0000 (0:00:00.149) 0:00:13.988 ***** 2026-01-07 00:45:18.158766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:18.158771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:18.158775 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158779 | orchestrator | 2026-01-07 00:45:18.158783 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-07 00:45:18.158788 | orchestrator | Wednesday 07 January 2026 00:45:18 +0000 (0:00:00.157) 0:00:14.145 ***** 2026-01-07 00:45:18.158795 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:18.158800 | orchestrator | 2026-01-07 00:45:18.158804 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-07 00:45:18.158812 | orchestrator | Wednesday 07 January 2026 00:45:18 +0000 (0:00:00.146) 0:00:14.292 ***** 2026-01-07 00:45:25.487998 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488067 | orchestrator | 2026-01-07 00:45:25.488075 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-07 00:45:25.488126 | orchestrator | Wednesday 07 January 2026 00:45:18 +0000 (0:00:00.139) 0:00:14.431 ***** 2026-01-07 00:45:25.488133 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488139 | orchestrator | 2026-01-07 00:45:25.488153 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-07 00:45:25.488160 | orchestrator | Wednesday 07 January 2026 00:45:18 +0000 (0:00:00.183) 0:00:14.615 ***** 2026-01-07 00:45:25.488175 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:45:25.488179 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-07 00:45:25.488183 | orchestrator | } 2026-01-07 00:45:25.488191 | orchestrator | 2026-01-07 00:45:25.488197 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-07 00:45:25.488204 | orchestrator | Wednesday 07 January 2026 00:45:19 +0000 (0:00:00.668) 0:00:15.284 ***** 2026-01-07 00:45:25.488211 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:45:25.488218 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-07 00:45:25.488224 | orchestrator | } 2026-01-07 00:45:25.488230 | orchestrator | 2026-01-07 00:45:25.488236 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-07 00:45:25.488243 | orchestrator | Wednesday 07 January 2026 00:45:19 +0000 (0:00:00.168) 0:00:15.452 ***** 2026-01-07 00:45:25.488249 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:45:25.488256 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-07 00:45:25.488264 | orchestrator | } 2026-01-07 00:45:25.488271 | orchestrator | 2026-01-07 00:45:25.488277 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-07 00:45:25.488284 | orchestrator | Wednesday 07 January 2026 00:45:19 +0000 (0:00:00.159) 0:00:15.611 ***** 2026-01-07 00:45:25.488290 | orchestrator | ok: 
[testbed-node-3] 2026-01-07 00:45:25.488294 | orchestrator | 2026-01-07 00:45:25.488298 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-07 00:45:25.488302 | orchestrator | Wednesday 07 January 2026 00:45:20 +0000 (0:00:00.744) 0:00:16.356 ***** 2026-01-07 00:45:25.488305 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:25.488309 | orchestrator | 2026-01-07 00:45:25.488313 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-07 00:45:25.488317 | orchestrator | Wednesday 07 January 2026 00:45:20 +0000 (0:00:00.619) 0:00:16.975 ***** 2026-01-07 00:45:25.488321 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:25.488325 | orchestrator | 2026-01-07 00:45:25.488329 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-07 00:45:25.488333 | orchestrator | Wednesday 07 January 2026 00:45:21 +0000 (0:00:00.653) 0:00:17.629 ***** 2026-01-07 00:45:25.488337 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:25.488340 | orchestrator | 2026-01-07 00:45:25.488344 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-07 00:45:25.488348 | orchestrator | Wednesday 07 January 2026 00:45:21 +0000 (0:00:00.165) 0:00:17.794 ***** 2026-01-07 00:45:25.488352 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488356 | orchestrator | 2026-01-07 00:45:25.488359 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-07 00:45:25.488363 | orchestrator | Wednesday 07 January 2026 00:45:21 +0000 (0:00:00.124) 0:00:17.918 ***** 2026-01-07 00:45:25.488367 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488371 | orchestrator | 2026-01-07 00:45:25.488375 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-07 00:45:25.488389 | orchestrator | 
Wednesday 07 January 2026 00:45:21 +0000 (0:00:00.121) 0:00:18.040 ***** 2026-01-07 00:45:25.488394 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:45:25.488397 | orchestrator |  "vgs_report": { 2026-01-07 00:45:25.488401 | orchestrator |  "vg": [] 2026-01-07 00:45:25.488405 | orchestrator |  } 2026-01-07 00:45:25.488409 | orchestrator | } 2026-01-07 00:45:25.488413 | orchestrator | 2026-01-07 00:45:25.488417 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-07 00:45:25.488421 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.147) 0:00:18.187 ***** 2026-01-07 00:45:25.488424 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488428 | orchestrator | 2026-01-07 00:45:25.488440 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-07 00:45:25.488444 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.126) 0:00:18.314 ***** 2026-01-07 00:45:25.488448 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488452 | orchestrator | 2026-01-07 00:45:25.488456 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-07 00:45:25.488460 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.143) 0:00:18.457 ***** 2026-01-07 00:45:25.488463 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488467 | orchestrator | 2026-01-07 00:45:25.488471 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-07 00:45:25.488475 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.334) 0:00:18.792 ***** 2026-01-07 00:45:25.488479 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488482 | orchestrator | 2026-01-07 00:45:25.488486 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-07 00:45:25.488490 | orchestrator | 
Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.171) 0:00:18.963 ***** 2026-01-07 00:45:25.488494 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488498 | orchestrator | 2026-01-07 00:45:25.488502 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-07 00:45:25.488506 | orchestrator | Wednesday 07 January 2026 00:45:22 +0000 (0:00:00.149) 0:00:19.112 ***** 2026-01-07 00:45:25.488509 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488513 | orchestrator | 2026-01-07 00:45:25.488517 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-07 00:45:25.488521 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.154) 0:00:19.267 ***** 2026-01-07 00:45:25.488524 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488529 | orchestrator | 2026-01-07 00:45:25.488535 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-07 00:45:25.488542 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.151) 0:00:19.419 ***** 2026-01-07 00:45:25.488561 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488568 | orchestrator | 2026-01-07 00:45:25.488575 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-07 00:45:25.488581 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.166) 0:00:19.585 ***** 2026-01-07 00:45:25.488588 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488595 | orchestrator | 2026-01-07 00:45:25.488603 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-07 00:45:25.488607 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.137) 0:00:19.722 ***** 2026-01-07 00:45:25.488612 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488617 | orchestrator | 2026-01-07 00:45:25.488624 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-07 00:45:25.488630 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.153) 0:00:19.876 ***** 2026-01-07 00:45:25.488636 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488643 | orchestrator | 2026-01-07 00:45:25.488649 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-07 00:45:25.488656 | orchestrator | Wednesday 07 January 2026 00:45:23 +0000 (0:00:00.139) 0:00:20.016 ***** 2026-01-07 00:45:25.488669 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488676 | orchestrator | 2026-01-07 00:45:25.488687 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-07 00:45:25.488699 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.125) 0:00:20.141 ***** 2026-01-07 00:45:25.488710 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488723 | orchestrator | 2026-01-07 00:45:25.488730 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-07 00:45:25.488737 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.173) 0:00:20.315 ***** 2026-01-07 00:45:25.488743 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488749 | orchestrator | 2026-01-07 00:45:25.488756 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-07 00:45:25.488763 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.150) 0:00:20.465 ***** 2026-01-07 00:45:25.488770 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:25.488778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 
'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:25.488785 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488791 | orchestrator | 2026-01-07 00:45:25.488798 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-07 00:45:25.488805 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.359) 0:00:20.825 ***** 2026-01-07 00:45:25.488813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:25.488820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:25.488827 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488834 | orchestrator | 2026-01-07 00:45:25.488842 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-07 00:45:25.488853 | orchestrator | Wednesday 07 January 2026 00:45:24 +0000 (0:00:00.174) 0:00:20.999 ***** 2026-01-07 00:45:25.488866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:25.488878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:25.488887 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488894 | orchestrator | 2026-01-07 00:45:25.488900 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-07 00:45:25.488906 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.177) 0:00:21.177 ***** 2026-01-07 00:45:25.488913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:25.488919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:25.488926 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488932 | orchestrator | 2026-01-07 00:45:25.488939 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-07 00:45:25.488945 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.145) 0:00:21.323 ***** 2026-01-07 00:45:25.488952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:25.488958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:25.488969 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:25.488976 | orchestrator | 2026-01-07 00:45:25.488982 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-07 00:45:25.488989 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.149) 0:00:21.473 ***** 2026-01-07 00:45:25.489001 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975787 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975795 | orchestrator | 2026-01-07 00:45:30.975799 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-07 00:45:30.975803 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.150) 0:00:21.623 ***** 2026-01-07 00:45:30.975811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975818 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975822 | orchestrator | 2026-01-07 00:45:30.975825 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-07 00:45:30.975828 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.148) 0:00:21.772 ***** 2026-01-07 00:45:30.975831 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975838 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975841 | orchestrator | 2026-01-07 00:45:30.975844 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-07 00:45:30.975847 | orchestrator | Wednesday 07 January 2026 00:45:25 +0000 (0:00:00.149) 0:00:21.922 ***** 2026-01-07 00:45:30.975850 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:30.975854 | orchestrator | 2026-01-07 00:45:30.975857 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-07 00:45:30.975861 | orchestrator | Wednesday 07 January 2026 00:45:26 +0000 
(0:00:00.476) 0:00:22.398 ***** 2026-01-07 00:45:30.975864 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:30.975867 | orchestrator | 2026-01-07 00:45:30.975870 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:45:30.975873 | orchestrator | Wednesday 07 January 2026 00:45:26 +0000 (0:00:00.481) 0:00:22.879 ***** 2026-01-07 00:45:30.975876 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:45:30.975879 | orchestrator | 2026-01-07 00:45:30.975882 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:45:30.975886 | orchestrator | Wednesday 07 January 2026 00:45:26 +0000 (0:00:00.155) 0:00:23.035 ***** 2026-01-07 00:45:30.975889 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'vg_name': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'}) 2026-01-07 00:45:30.975893 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'vg_name': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'}) 2026-01-07 00:45:30.975901 | orchestrator | 2026-01-07 00:45:30.975904 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:45:30.975907 | orchestrator | Wednesday 07 January 2026 00:45:27 +0000 (0:00:00.200) 0:00:23.235 ***** 2026-01-07 00:45:30.975910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975927 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975930 | orchestrator | 2026-01-07 00:45:30.975933 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-07 00:45:30.975936 | orchestrator | Wednesday 07 January 2026 00:45:27 +0000 (0:00:00.375) 0:00:23.610 ***** 2026-01-07 00:45:30.975939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975946 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975949 | orchestrator | 2026-01-07 00:45:30.975952 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:45:30.975955 | orchestrator | Wednesday 07 January 2026 00:45:27 +0000 (0:00:00.171) 0:00:23.782 ***** 2026-01-07 00:45:30.975958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})  2026-01-07 00:45:30.975962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})  2026-01-07 00:45:30.975965 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:45:30.975968 | orchestrator | 2026-01-07 00:45:30.975971 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:45:30.975974 | orchestrator | Wednesday 07 January 2026 00:45:27 +0000 (0:00:00.167) 0:00:23.949 ***** 2026-01-07 00:45:30.975986 | orchestrator | ok: [testbed-node-3] => { 2026-01-07 00:45:30.975990 | orchestrator |  "lvm_report": { 2026-01-07 00:45:30.975993 | orchestrator |  "lv": [ 2026-01-07 00:45:30.975996 | orchestrator |  { 2026-01-07 00:45:30.975999 | orchestrator |  "lv_name": 
"osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b", 2026-01-07 00:45:30.976003 | orchestrator |  "vg_name": "ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b" 2026-01-07 00:45:30.976006 | orchestrator |  }, 2026-01-07 00:45:30.976009 | orchestrator |  { 2026-01-07 00:45:30.976012 | orchestrator |  "lv_name": "osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1", 2026-01-07 00:45:30.976015 | orchestrator |  "vg_name": "ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1" 2026-01-07 00:45:30.976019 | orchestrator |  } 2026-01-07 00:45:30.976022 | orchestrator |  ], 2026-01-07 00:45:30.976025 | orchestrator |  "pv": [ 2026-01-07 00:45:30.976028 | orchestrator |  { 2026-01-07 00:45:30.976031 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:45:30.976034 | orchestrator |  "vg_name": "ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b" 2026-01-07 00:45:30.976037 | orchestrator |  }, 2026-01-07 00:45:30.976041 | orchestrator |  { 2026-01-07 00:45:30.976046 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:45:30.976061 | orchestrator |  "vg_name": "ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1" 2026-01-07 00:45:30.976066 | orchestrator |  } 2026-01-07 00:45:30.976108 | orchestrator |  ] 2026-01-07 00:45:30.976116 | orchestrator |  } 2026-01-07 00:45:30.976121 | orchestrator | } 2026-01-07 00:45:30.976128 | orchestrator | 2026-01-07 00:45:30.976133 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-07 00:45:30.976139 | orchestrator | 2026-01-07 00:45:30.976146 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-07 00:45:30.976152 | orchestrator | Wednesday 07 January 2026 00:45:28 +0000 (0:00:00.285) 0:00:24.235 ***** 2026-01-07 00:45:30.976166 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-07 00:45:30.976172 | orchestrator | 2026-01-07 00:45:30.976178 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-07 
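
The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above merges the reports of `lvs --reportformat json` and `pvs --reportformat json` into the `lvm_report` structure printed in the log. A minimal sketch of that merge, assuming the standard LVM2 JSON report layout (`{"report": [{"lv": [...]}]}` / `{"report": [{"pv": [...]}]}`); the function name `combine_reports` is hypothetical, not the playbook's actual implementation:

```python
import json

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge lvs/pvs JSON reports into {"lv": [...], "pv": [...]}."""
    # LVM2 wraps each report in {"report": [{...}]}.
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {
        "lv": [{"lv_name": e["lv_name"], "vg_name": e["vg_name"]} for e in lvs],
        "pv": [{"pv_name": e["pv_name"], "vg_name": e["vg_name"]} for e in pvs],
    }
```

With the devices from this run, the result is the same lv/pv mapping shown in the "Print LVM report data" task: each `osd-block-<uuid>` LV paired with its `ceph-<uuid>` VG, and `/dev/sdb`, `/dev/sdc` as the backing PVs.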
00:45:30.976184 | orchestrator | Wednesday 07 January 2026 00:45:28 +0000 (0:00:00.257) 0:00:24.493 ***** 2026-01-07 00:45:30.976189 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:45:30.976192 | orchestrator | 2026-01-07 00:45:30.976195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976198 | orchestrator | Wednesday 07 January 2026 00:45:28 +0000 (0:00:00.240) 0:00:24.733 ***** 2026-01-07 00:45:30.976202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-07 00:45:30.976205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:45:30.976208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:45:30.976211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:45:30.976214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:45:30.976217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:45:30.976220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:45:30.976226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:45:30.976229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-07 00:45:30.976232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:45:30.976236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:45:30.976238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:45:30.976241 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:45:30.976244 | orchestrator | 2026-01-07 00:45:30.976248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976251 | orchestrator | Wednesday 07 January 2026 00:45:29 +0000 (0:00:00.410) 0:00:25.144 ***** 2026-01-07 00:45:30.976254 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976257 | orchestrator | 2026-01-07 00:45:30.976260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976263 | orchestrator | Wednesday 07 January 2026 00:45:29 +0000 (0:00:00.237) 0:00:25.382 ***** 2026-01-07 00:45:30.976266 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976269 | orchestrator | 2026-01-07 00:45:30.976272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976275 | orchestrator | Wednesday 07 January 2026 00:45:29 +0000 (0:00:00.237) 0:00:25.619 ***** 2026-01-07 00:45:30.976278 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976281 | orchestrator | 2026-01-07 00:45:30.976284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976287 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:00.727) 0:00:26.346 ***** 2026-01-07 00:45:30.976290 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976293 | orchestrator | 2026-01-07 00:45:30.976296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:30.976299 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:00.248) 0:00:26.595 ***** 2026-01-07 00:45:30.976303 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976306 | orchestrator | 2026-01-07 00:45:30.976310 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-07 00:45:30.976314 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:00.270) 0:00:26.866 ***** 2026-01-07 00:45:30.976359 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:30.976363 | orchestrator | 2026-01-07 00:45:30.976372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974744 | orchestrator | Wednesday 07 January 2026 00:45:30 +0000 (0:00:00.240) 0:00:27.107 ***** 2026-01-07 00:45:42.974802 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.974812 | orchestrator | 2026-01-07 00:45:42.974819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974826 | orchestrator | Wednesday 07 January 2026 00:45:31 +0000 (0:00:00.257) 0:00:27.364 ***** 2026-01-07 00:45:42.974833 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.974843 | orchestrator | 2026-01-07 00:45:42.974847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974851 | orchestrator | Wednesday 07 January 2026 00:45:31 +0000 (0:00:00.226) 0:00:27.591 ***** 2026-01-07 00:45:42.974856 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7) 2026-01-07 00:45:42.974864 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7) 2026-01-07 00:45:42.974870 | orchestrator | 2026-01-07 00:45:42.974876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974883 | orchestrator | Wednesday 07 January 2026 00:45:31 +0000 (0:00:00.516) 0:00:28.107 ***** 2026-01-07 00:45:42.974889 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab) 2026-01-07 00:45:42.974895 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab) 2026-01-07 00:45:42.974901 | orchestrator | 2026-01-07 00:45:42.974908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974913 | orchestrator | Wednesday 07 January 2026 00:45:32 +0000 (0:00:00.465) 0:00:28.573 ***** 2026-01-07 00:45:42.974920 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1) 2026-01-07 00:45:42.974926 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1) 2026-01-07 00:45:42.974933 | orchestrator | 2026-01-07 00:45:42.974939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974946 | orchestrator | Wednesday 07 January 2026 00:45:32 +0000 (0:00:00.484) 0:00:29.058 ***** 2026-01-07 00:45:42.974953 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151) 2026-01-07 00:45:42.974959 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151) 2026-01-07 00:45:42.974975 | orchestrator | 2026-01-07 00:45:42.974982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-07 00:45:42.974988 | orchestrator | Wednesday 07 January 2026 00:45:33 +0000 (0:00:00.716) 0:00:29.775 ***** 2026-01-07 00:45:42.974995 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-07 00:45:42.975001 | orchestrator | 2026-01-07 00:45:42.975008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975015 | orchestrator | Wednesday 07 January 2026 00:45:34 +0000 (0:00:00.803) 0:00:30.578 ***** 2026-01-07 00:45:42.975031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
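
The repeated "Add known links" tasks above register the `/dev/disk/by-id` aliases (`scsi-0QEMU_...`, `scsi-SQEMU_...`, `ata-QEMU_DVD-ROM_...`) for each kernel device, so a disk can be addressed by either spelling. A minimal sketch of that symlink-to-device mapping under the assumption that the directory contains only symlinks, as `/dev/disk/by-id` does; `device_links` is a hypothetical helper, not part of the playbook:

```python
import os

def device_links(by_id_dir: str) -> dict:
    """Map kernel device names (sdb, ...) to their by-id symlink names."""
    links: dict = {}
    for name in sorted(os.listdir(by_id_dir)):
        # Resolve the symlink to the real device node it points at.
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links.setdefault(os.path.basename(target), []).append(name)
    return links
```

On this node that would yield, e.g., two `scsi-*QEMU_QEMU_HARDDISK_*` aliases per virtual disk, matching the pairs of `ok:` items in the log.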
(item=loop0) 2026-01-07 00:45:42.975035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-07 00:45:42.975039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-07 00:45:42.975043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-07 00:45:42.975047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-07 00:45:42.975051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-07 00:45:42.975101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-07 00:45:42.975107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-07 00:45:42.975111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-07 00:45:42.975114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-07 00:45:42.975118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-07 00:45:42.975122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-07 00:45:42.975126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-07 00:45:42.975129 | orchestrator | 2026-01-07 00:45:42.975133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975137 | orchestrator | Wednesday 07 January 2026 00:45:35 +0000 (0:00:01.191) 0:00:31.770 ***** 2026-01-07 00:45:42.975141 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975144 | orchestrator | 2026-01-07 
00:45:42.975148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975153 | orchestrator | Wednesday 07 January 2026 00:45:35 +0000 (0:00:00.254) 0:00:32.025 ***** 2026-01-07 00:45:42.975156 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975160 | orchestrator | 2026-01-07 00:45:42.975164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975168 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:00.304) 0:00:32.329 ***** 2026-01-07 00:45:42.975171 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975175 | orchestrator | 2026-01-07 00:45:42.975189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975198 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:00.222) 0:00:32.552 ***** 2026-01-07 00:45:42.975202 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975206 | orchestrator | 2026-01-07 00:45:42.975214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975218 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:00.280) 0:00:32.832 ***** 2026-01-07 00:45:42.975221 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975225 | orchestrator | 2026-01-07 00:45:42.975229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975233 | orchestrator | Wednesday 07 January 2026 00:45:36 +0000 (0:00:00.268) 0:00:33.101 ***** 2026-01-07 00:45:42.975236 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975240 | orchestrator | 2026-01-07 00:45:42.975244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975247 | orchestrator | Wednesday 07 January 2026 00:45:37 +0000 (0:00:00.318) 
0:00:33.420 ***** 2026-01-07 00:45:42.975251 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975255 | orchestrator | 2026-01-07 00:45:42.975259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975262 | orchestrator | Wednesday 07 January 2026 00:45:37 +0000 (0:00:00.254) 0:00:33.674 ***** 2026-01-07 00:45:42.975266 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975270 | orchestrator | 2026-01-07 00:45:42.975274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975277 | orchestrator | Wednesday 07 January 2026 00:45:37 +0000 (0:00:00.208) 0:00:33.882 ***** 2026-01-07 00:45:42.975281 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-07 00:45:42.975285 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-07 00:45:42.975289 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-07 00:45:42.975293 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-07 00:45:42.975296 | orchestrator | 2026-01-07 00:45:42.975300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975307 | orchestrator | Wednesday 07 January 2026 00:45:38 +0000 (0:00:00.718) 0:00:34.601 ***** 2026-01-07 00:45:42.975311 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975315 | orchestrator | 2026-01-07 00:45:42.975319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975323 | orchestrator | Wednesday 07 January 2026 00:45:38 +0000 (0:00:00.185) 0:00:34.786 ***** 2026-01-07 00:45:42.975326 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975330 | orchestrator | 2026-01-07 00:45:42.975334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975337 | orchestrator | Wednesday 07 
January 2026 00:45:39 +0000 (0:00:00.463) 0:00:35.250 ***** 2026-01-07 00:45:42.975341 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975345 | orchestrator | 2026-01-07 00:45:42.975349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:45:42.975352 | orchestrator | Wednesday 07 January 2026 00:45:39 +0000 (0:00:00.151) 0:00:35.402 ***** 2026-01-07 00:45:42.975356 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975360 | orchestrator | 2026-01-07 00:45:42.975363 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:45:42.975367 | orchestrator | Wednesday 07 January 2026 00:45:39 +0000 (0:00:00.170) 0:00:35.572 ***** 2026-01-07 00:45:42.975371 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975375 | orchestrator | 2026-01-07 00:45:42.975379 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:45:42.975383 | orchestrator | Wednesday 07 January 2026 00:45:39 +0000 (0:00:00.145) 0:00:35.717 ***** 2026-01-07 00:45:42.975387 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0b3967c5-6312-5066-b0c3-d93b1266106e'}}) 2026-01-07 00:45:42.975391 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}}) 2026-01-07 00:45:42.975395 | orchestrator | 2026-01-07 00:45:42.975399 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:45:42.975403 | orchestrator | Wednesday 07 January 2026 00:45:39 +0000 (0:00:00.187) 0:00:35.905 ***** 2026-01-07 00:45:42.975407 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'}) 2026-01-07 00:45:42.975412 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}) 2026-01-07 00:45:42.975416 | orchestrator | 2026-01-07 00:45:42.975420 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-07 00:45:42.975424 | orchestrator | Wednesday 07 January 2026 00:45:41 +0000 (0:00:01.774) 0:00:37.680 ***** 2026-01-07 00:45:42.975428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})  2026-01-07 00:45:42.975433 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})  2026-01-07 00:45:42.975437 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:42.975441 | orchestrator | 2026-01-07 00:45:42.975444 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-07 00:45:42.975448 | orchestrator | Wednesday 07 January 2026 00:45:41 +0000 (0:00:00.155) 0:00:37.835 ***** 2026-01-07 00:45:42.975452 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'}) 2026-01-07 00:45:42.975460 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'}) 2026-01-07 00:45:48.219709 | orchestrator | 2026-01-07 00:45:48.219775 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-07 00:45:48.219794 | orchestrator | Wednesday 07 January 2026 00:45:42 +0000 (0:00:01.272) 0:00:39.107 ***** 2026-01-07 00:45:48.219809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 
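
The "Create dict of block VGs -> PVs" and "Create block VGs"/"Create block LVs" tasks above follow a visible naming scheme: each entry of `ceph_osd_devices` carries an `osd_lvm_uuid`, from which the playbook derives a `ceph-<uuid>` volume group on the device and an `osd-block-<uuid>` logical volume inside it. A minimal sketch of that derivation; `block_layout` is a hypothetical name for illustration, not the role's actual task logic:

```python
def block_layout(ceph_osd_devices: dict) -> list:
    """Derive the VG/LV plan seen in the 'Create block VGs/LVs' tasks."""
    plan = []
    for dev, meta in ceph_osd_devices.items():
        uuid = meta["osd_lvm_uuid"]
        plan.append({
            "pv": f"/dev/{dev}",                 # physical volume backing the VG
            "data_vg": f"ceph-{uuid}",           # volume group name
            "data": f"osd-block-{uuid}",         # block LV name inside the VG
        })
    return plan
```

For testbed-node-4 this reproduces the two `changed:` items in the log, i.e. one VG/LV pair on `/dev/sdb` and one on `/dev/sdc`.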
'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})  2026-01-07 00:45:48.219815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})  2026-01-07 00:45:48.219819 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219824 | orchestrator | 2026-01-07 00:45:48.219829 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-07 00:45:48.219833 | orchestrator | Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.143) 0:00:39.251 ***** 2026-01-07 00:45:48.219836 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219841 | orchestrator | 2026-01-07 00:45:48.219845 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-07 00:45:48.219849 | orchestrator | Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.112) 0:00:39.364 ***** 2026-01-07 00:45:48.219853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})  2026-01-07 00:45:48.219857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})  2026-01-07 00:45:48.219861 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219864 | orchestrator | 2026-01-07 00:45:48.219868 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-07 00:45:48.219872 | orchestrator | Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.142) 0:00:39.506 ***** 2026-01-07 00:45:48.219876 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219880 | orchestrator | 2026-01-07 00:45:48.219884 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-07 00:45:48.219888 | orchestrator | 
Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.120) 0:00:39.627 ***** 2026-01-07 00:45:48.219892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})  2026-01-07 00:45:48.219896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})  2026-01-07 00:45:48.219900 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219904 | orchestrator | 2026-01-07 00:45:48.219908 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-07 00:45:48.219914 | orchestrator | Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.316) 0:00:39.944 ***** 2026-01-07 00:45:48.219918 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219922 | orchestrator | 2026-01-07 00:45:48.219926 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-07 00:45:48.219930 | orchestrator | Wednesday 07 January 2026 00:45:43 +0000 (0:00:00.130) 0:00:40.074 ***** 2026-01-07 00:45:48.219934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})  2026-01-07 00:45:48.219938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})  2026-01-07 00:45:48.219942 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:45:48.219945 | orchestrator | 2026-01-07 00:45:48.219949 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-07 00:45:48.219953 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.118) 0:00:40.193 ***** 2026-01-07 00:45:48.219957 | orchestrator | ok: [testbed-node-4] 
2026-01-07 00:45:48.219962 | orchestrator |
2026-01-07 00:45:48.219966 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-07 00:45:48.219973 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.126) 0:00:40.319 *****
2026-01-07 00:45:48.219977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:48.219981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:48.219985 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.219989 | orchestrator |
2026-01-07 00:45:48.219993 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-07 00:45:48.219997 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.134) 0:00:40.454 *****
2026-01-07 00:45:48.220001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:48.220005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:48.220009 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220013 | orchestrator |
2026-01-07 00:45:48.220017 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-07 00:45:48.220030 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.147) 0:00:40.602 *****
2026-01-07 00:45:48.220035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:48.220039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:48.220043 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220047 | orchestrator |
2026-01-07 00:45:48.220051 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-07 00:45:48.220055 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.141) 0:00:40.744 *****
2026-01-07 00:45:48.220084 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220088 | orchestrator |
2026-01-07 00:45:48.220092 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-07 00:45:48.220095 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.126) 0:00:40.871 *****
2026-01-07 00:45:48.220099 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220103 | orchestrator |
2026-01-07 00:45:48.220107 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-07 00:45:48.220110 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.110) 0:00:40.981 *****
2026-01-07 00:45:48.220114 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220118 | orchestrator |
2026-01-07 00:45:48.220121 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-07 00:45:48.220125 | orchestrator | Wednesday 07 January 2026 00:45:44 +0000 (0:00:00.104) 0:00:41.085 *****
2026-01-07 00:45:48.220129 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:45:48.220133 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-07 00:45:48.220137 | orchestrator | }
2026-01-07 00:45:48.220141 | orchestrator |
2026-01-07 00:45:48.220144 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-07 00:45:48.220148 | orchestrator | Wednesday 07 January 2026 00:45:45 +0000 (0:00:00.122) 0:00:41.208 *****
2026-01-07 00:45:48.220152 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:45:48.220156 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-07 00:45:48.220159 | orchestrator | }
2026-01-07 00:45:48.220163 | orchestrator |
2026-01-07 00:45:48.220167 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-07 00:45:48.220170 | orchestrator | Wednesday 07 January 2026 00:45:45 +0000 (0:00:00.132) 0:00:41.340 *****
2026-01-07 00:45:48.220177 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:45:48.220182 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-07 00:45:48.220186 | orchestrator | }
2026-01-07 00:45:48.220189 | orchestrator |
2026-01-07 00:45:48.220193 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-07 00:45:48.220197 | orchestrator | Wednesday 07 January 2026 00:45:45 +0000 (0:00:00.297) 0:00:41.637 *****
2026-01-07 00:45:48.220200 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:48.220204 | orchestrator |
2026-01-07 00:45:48.220208 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-07 00:45:48.220215 | orchestrator | Wednesday 07 January 2026 00:45:46 +0000 (0:00:00.515) 0:00:42.153 *****
2026-01-07 00:45:48.220218 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:48.220222 | orchestrator |
2026-01-07 00:45:48.220226 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-07 00:45:48.220230 | orchestrator | Wednesday 07 January 2026 00:45:46 +0000 (0:00:00.483) 0:00:42.637 *****
2026-01-07 00:45:48.220233 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:48.220237 | orchestrator |
2026-01-07 00:45:48.220241 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-07 00:45:48.220244 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.150) 0:00:43.189 *****
2026-01-07 00:45:48.220248 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:48.220252 | orchestrator |
2026-01-07 00:45:48.220256 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-07 00:45:48.220259 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.132) 0:00:43.340 *****
2026-01-07 00:45:48.220263 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220267 | orchestrator |
2026-01-07 00:45:48.220271 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-07 00:45:48.220274 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.130) 0:00:43.472 *****
2026-01-07 00:45:48.220278 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220282 | orchestrator |
2026-01-07 00:45:48.220286 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-07 00:45:48.220291 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.155) 0:00:43.603 *****
2026-01-07 00:45:48.220295 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:45:48.220300 | orchestrator |     "vgs_report": {
2026-01-07 00:45:48.220304 | orchestrator |         "vg": []
2026-01-07 00:45:48.220309 | orchestrator |     }
2026-01-07 00:45:48.220313 | orchestrator | }
2026-01-07 00:45:48.220318 | orchestrator |
2026-01-07 00:45:48.220322 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-07 00:45:48.220327 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.148) 0:00:43.758 *****
2026-01-07 00:45:48.220331 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220335 | orchestrator |
2026-01-07 00:45:48.220339 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-07 00:45:48.220344 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.158) 0:00:43.907 *****
2026-01-07 00:45:48.220348 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220353 | orchestrator |
2026-01-07 00:45:48.220357 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-07 00:45:48.220361 | orchestrator | Wednesday 07 January 2026 00:45:47 +0000 (0:00:00.140) 0:00:44.066 *****
2026-01-07 00:45:48.220366 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220370 | orchestrator |
2026-01-07 00:45:48.220374 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-07 00:45:48.220379 | orchestrator | Wednesday 07 January 2026 00:45:48 +0000 (0:00:00.147) 0:00:44.206 *****
2026-01-07 00:45:48.220383 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:48.220390 | orchestrator |
2026-01-07 00:45:48.220400 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-07 00:45:53.018333 | orchestrator | Wednesday 07 January 2026 00:45:48 +0000 (0:00:00.339) 0:00:44.354 *****
2026-01-07 00:45:53.018408 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018417 | orchestrator |
2026-01-07 00:45:53.018424 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-07 00:45:53.018430 | orchestrator | Wednesday 07 January 2026 00:45:48 +0000 (0:00:00.164) 0:00:44.694 *****
2026-01-07 00:45:53.018436 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018443 | orchestrator |
2026-01-07 00:45:53.018449 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-07 00:45:53.018455 | orchestrator | Wednesday 07 January 2026 00:45:48 +0000 (0:00:00.147) 0:00:44.858 *****
2026-01-07 00:45:53.018462 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018468 | orchestrator |
2026-01-07 00:45:53.018475 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-07 00:45:53.018481 | orchestrator | Wednesday 07 January 2026 00:45:48 +0000 (0:00:00.147) 0:00:45.005 *****
2026-01-07 00:45:53.018488 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018494 | orchestrator |
2026-01-07 00:45:53.018501 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-07 00:45:53.018507 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.145) 0:00:45.151 *****
2026-01-07 00:45:53.018514 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018520 | orchestrator |
2026-01-07 00:45:53.018524 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-07 00:45:53.018528 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.166) 0:00:45.317 *****
2026-01-07 00:45:53.018532 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018535 | orchestrator |
2026-01-07 00:45:53.018539 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-07 00:45:53.018543 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.136) 0:00:45.453 *****
2026-01-07 00:45:53.018546 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018550 | orchestrator |
2026-01-07 00:45:53.018554 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-07 00:45:53.018558 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.124) 0:00:45.577 *****
2026-01-07 00:45:53.018561 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018565 | orchestrator |
2026-01-07 00:45:53.018569 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-07 00:45:53.018573 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.152) 0:00:45.729 *****
2026-01-07 00:45:53.018576 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018580 | orchestrator |
2026-01-07 00:45:53.018584 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-07 00:45:53.018587 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.145) 0:00:45.875 *****
2026-01-07 00:45:53.018591 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018595 | orchestrator |
2026-01-07 00:45:53.018599 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-07 00:45:53.018602 | orchestrator | Wednesday 07 January 2026 00:45:49 +0000 (0:00:00.142) 0:00:46.017 *****
2026-01-07 00:45:53.018607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018616 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018619 | orchestrator |
2026-01-07 00:45:53.018623 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-07 00:45:53.018627 | orchestrator | Wednesday 07 January 2026 00:45:50 +0000 (0:00:00.166) 0:00:46.184 *****
2026-01-07 00:45:53.018631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018645 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018649 | orchestrator |
2026-01-07 00:45:53.018653 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-07 00:45:53.018659 | orchestrator | Wednesday 07 January 2026 00:45:50 +0000 (0:00:00.163) 0:00:46.347 *****
2026-01-07 00:45:53.018666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018679 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018686 | orchestrator |
2026-01-07 00:45:53.018692 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-07 00:45:53.018699 | orchestrator | Wednesday 07 January 2026 00:45:50 +0000 (0:00:00.431) 0:00:46.779 *****
2026-01-07 00:45:53.018705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018718 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018724 | orchestrator |
2026-01-07 00:45:53.018741 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-07 00:45:53.018748 | orchestrator | Wednesday 07 January 2026 00:45:50 +0000 (0:00:00.151) 0:00:46.930 *****
2026-01-07 00:45:53.018755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018763 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018766 | orchestrator |
2026-01-07 00:45:53.018770 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-07 00:45:53.018774 | orchestrator | Wednesday 07 January 2026 00:45:50 +0000 (0:00:00.145) 0:00:47.076 *****
2026-01-07 00:45:53.018778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018786 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018789 | orchestrator |
2026-01-07 00:45:53.018793 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-07 00:45:53.018797 | orchestrator | Wednesday 07 January 2026 00:45:51 +0000 (0:00:00.139) 0:00:47.216 *****
2026-01-07 00:45:53.018823 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018831 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018835 | orchestrator |
2026-01-07 00:45:53.018839 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-07 00:45:53.018842 | orchestrator | Wednesday 07 January 2026 00:45:51 +0000 (0:00:00.146) 0:00:47.362 *****
2026-01-07 00:45:53.018846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018860 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.018864 | orchestrator |
2026-01-07 00:45:53.018867 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-07 00:45:53.018871 | orchestrator | Wednesday 07 January 2026 00:45:51 +0000 (0:00:00.142) 0:00:47.505 *****
2026-01-07 00:45:53.018875 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:53.018879 | orchestrator |
2026-01-07 00:45:53.018883 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-07 00:45:53.018886 | orchestrator | Wednesday 07 January 2026 00:45:51 +0000 (0:00:00.470) 0:00:47.976 *****
2026-01-07 00:45:53.018890 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:53.018895 | orchestrator |
2026-01-07 00:45:53.018901 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-07 00:45:53.018908 | orchestrator | Wednesday 07 January 2026 00:45:52 +0000 (0:00:00.160) 0:00:48.492 *****
2026-01-07 00:45:53.018915 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:45:53.018922 | orchestrator |
2026-01-07 00:45:53.018929 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-07 00:45:53.018945 | orchestrator | Wednesday 07 January 2026 00:45:52 +0000 (0:00:00.161) 0:00:48.652 *****
2026-01-07 00:45:53.018952 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'vg_name': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018959 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'vg_name': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018966 | orchestrator |
2026-01-07 00:45:53.018972 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-07 00:45:53.018978 | orchestrator | Wednesday 07 January 2026 00:45:52 +0000 (0:00:00.186) 0:00:48.814 *****
2026-01-07 00:45:53.018985 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.018992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:53.018999 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:53.019006 | orchestrator |
2026-01-07 00:45:53.019013 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-07 00:45:53.019019 | orchestrator | Wednesday 07 January 2026 00:45:52 +0000 (0:00:00.152) 0:00:49.000 *****
2026-01-07 00:45:53.019026 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:53.019037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:59.326105 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:59.326192 | orchestrator |
2026-01-07 00:45:59.326200 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-07 00:45:59.326207 | orchestrator | Wednesday 07 January 2026 00:45:53 +0000 (0:00:00.152) 0:00:49.153 *****
2026-01-07 00:45:59.326211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:45:59.326217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:45:59.326221 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:45:59.326225 | orchestrator |
2026-01-07 00:45:59.326229 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-07 00:45:59.326250 | orchestrator | Wednesday 07 January 2026 00:45:53 +0000 (0:00:00.146) 0:00:49.299 *****
2026-01-07 00:45:59.326254 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 00:45:59.326258 | orchestrator |     "lvm_report": {
2026-01-07 00:45:59.326263 | orchestrator |         "lv": [
2026-01-07 00:45:59.326268 | orchestrator |             {
2026-01-07 00:45:59.326272 | orchestrator |                 "lv_name": "osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e",
2026-01-07 00:45:59.326277 | orchestrator |                 "vg_name": "ceph-0b3967c5-6312-5066-b0c3-d93b1266106e"
2026-01-07 00:45:59.326281 | orchestrator |             },
2026-01-07 00:45:59.326285 | orchestrator |             {
2026-01-07 00:45:59.326289 | orchestrator |                 "lv_name": "osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1",
2026-01-07 00:45:59.326293 | orchestrator |                 "vg_name": "ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1"
2026-01-07 00:45:59.326296 | orchestrator |             }
2026-01-07 00:45:59.326300 | orchestrator |         ],
2026-01-07 00:45:59.326304 | orchestrator |         "pv": [
2026-01-07 00:45:59.326308 | orchestrator |             {
2026-01-07 00:45:59.326311 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-07 00:45:59.326315 | orchestrator |                 "vg_name": "ceph-0b3967c5-6312-5066-b0c3-d93b1266106e"
2026-01-07 00:45:59.326319 | orchestrator |             },
2026-01-07 00:45:59.326323 | orchestrator |             {
2026-01-07 00:45:59.326327 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-07 00:45:59.326331 | orchestrator |                 "vg_name": "ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1"
2026-01-07 00:45:59.326334 | orchestrator |             }
2026-01-07 00:45:59.326338 | orchestrator |         ]
2026-01-07 00:45:59.326342 | orchestrator |     }
2026-01-07 00:45:59.326346 | orchestrator | }
2026-01-07 00:45:59.326350 | orchestrator |
2026-01-07 00:45:59.326354 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-07 00:45:59.326357 | orchestrator |
2026-01-07 00:45:59.326361 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-07 00:45:59.326365 | orchestrator | Wednesday 07 January 2026 00:45:53 +0000 (0:00:00.503) 0:00:49.803 *****
2026-01-07 00:45:59.326379 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-07 00:45:59.326383 | orchestrator |
2026-01-07 00:45:59.326387 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-07 00:45:59.326391 | orchestrator | Wednesday 07 January 2026 00:45:53 +0000 (0:00:00.280) 0:00:50.083 *****
2026-01-07 00:45:59.326396 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:45:59.326402 | orchestrator |
2026-01-07 00:45:59.326408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326415 | orchestrator | Wednesday 07 January 2026 00:45:54 +0000 (0:00:00.265) 0:00:50.349 *****
2026-01-07 00:45:59.326422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:45:59.326429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:45:59.326435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:45:59.326441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:45:59.326448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:45:59.326454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:45:59.326461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:45:59.326467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:45:59.326473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-07 00:45:59.326479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:45:59.326488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:45:59.326492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:45:59.326495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:45:59.326499 | orchestrator |
2026-01-07 00:45:59.326503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326509 | orchestrator | Wednesday 07 January 2026 00:45:54 +0000 (0:00:00.427) 0:00:50.777 *****
2026-01-07 00:45:59.326515 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326521 | orchestrator |
2026-01-07 00:45:59.326527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326533 | orchestrator | Wednesday 07 January 2026 00:45:54 +0000 (0:00:00.211) 0:00:50.988 *****
2026-01-07 00:45:59.326539 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326546 | orchestrator |
2026-01-07 00:45:59.326550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326566 | orchestrator | Wednesday 07 January 2026 00:45:55 +0000 (0:00:00.215) 0:00:51.204 *****
2026-01-07 00:45:59.326570 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326574 | orchestrator |
2026-01-07 00:45:59.326579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326583 | orchestrator | Wednesday 07 January 2026 00:45:55 +0000 (0:00:00.196) 0:00:51.401 *****
2026-01-07 00:45:59.326587 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326592 | orchestrator |
2026-01-07 00:45:59.326596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326600 | orchestrator | Wednesday 07 January 2026 00:45:55 +0000 (0:00:00.196) 0:00:51.597 *****
2026-01-07 00:45:59.326604 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326609 | orchestrator |
2026-01-07 00:45:59.326613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326617 | orchestrator | Wednesday 07 January 2026 00:45:56 +0000 (0:00:00.748) 0:00:52.345 *****
2026-01-07 00:45:59.326622 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326626 | orchestrator |
2026-01-07 00:45:59.326631 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326635 | orchestrator | Wednesday 07 January 2026 00:45:56 +0000 (0:00:00.187) 0:00:52.532 *****
2026-01-07 00:45:59.326640 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326644 | orchestrator |
2026-01-07 00:45:59.326648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326652 | orchestrator | Wednesday 07 January 2026 00:45:56 +0000 (0:00:00.206) 0:00:52.739 *****
2026-01-07 00:45:59.326657 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:45:59.326661 | orchestrator |
2026-01-07 00:45:59.326666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326670 | orchestrator | Wednesday 07 January 2026 00:45:56 +0000 (0:00:00.241) 0:00:52.981 *****
2026-01-07 00:45:59.326675 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015)
2026-01-07 00:45:59.326681 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015)
2026-01-07 00:45:59.326685 | orchestrator |
2026-01-07 00:45:59.326690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326694 | orchestrator | Wednesday 07 January 2026 00:45:57 +0000 (0:00:00.450) 0:00:53.432 *****
2026-01-07 00:45:59.326698 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c)
2026-01-07 00:45:59.326703 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c)
2026-01-07 00:45:59.326707 | orchestrator |
2026-01-07 00:45:59.326711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326732 | orchestrator | Wednesday 07 January 2026 00:45:57 +0000 (0:00:00.429) 0:00:53.861 *****
2026-01-07 00:45:59.326737 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5)
2026-01-07 00:45:59.326741 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5)
2026-01-07 00:45:59.326745 | orchestrator |
2026-01-07 00:45:59.326750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326754 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.412) 0:00:54.274 *****
2026-01-07 00:45:59.326759 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8)
2026-01-07 00:45:59.326763 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8)
2026-01-07 00:45:59.326767 | orchestrator |
2026-01-07 00:45:59.326771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-07 00:45:59.326776 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.436) 0:00:54.711 *****
2026-01-07 00:45:59.326781 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-07 00:45:59.326785 | orchestrator |
2026-01-07 00:45:59.326789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:45:59.326794 | orchestrator | Wednesday 07 January 2026 00:45:58 +0000 (0:00:00.330) 0:00:55.041 *****
2026-01-07 00:45:59.326799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-07 00:45:59.326803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-07 00:45:59.326808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-07 00:45:59.326812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-07 00:45:59.326816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-07 00:45:59.326819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-07 00:45:59.326823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-07 00:45:59.326827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-07 00:45:59.326830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-07 00:45:59.326834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-07 00:45:59.326838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-07 00:45:59.326845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-07 00:46:08.239650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-07 00:46:08.239743 | orchestrator |
2026-01-07 00:46:08.239757 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239768 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.413) 0:00:55.454 *****
2026-01-07 00:46:08.239778 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239788 | orchestrator |
2026-01-07 00:46:08.239798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239807 | orchestrator | Wednesday 07 January 2026 00:45:59 +0000 (0:00:00.206) 0:00:55.660 *****
2026-01-07 00:46:08.239817 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239826 | orchestrator |
2026-01-07 00:46:08.239835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239844 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.585) 0:00:56.245 *****
2026-01-07 00:46:08.239853 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239882 | orchestrator |
2026-01-07 00:46:08.239893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239902 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.230) 0:00:56.476 *****
2026-01-07 00:46:08.239911 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239920 | orchestrator |
2026-01-07 00:46:08.239929 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239938 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.213) 0:00:56.689 *****
2026-01-07 00:46:08.239947 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239957 | orchestrator |
2026-01-07 00:46:08.239966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.239975 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.197) 0:00:56.887 *****
2026-01-07 00:46:08.239984 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.239993 | orchestrator |
2026-01-07 00:46:08.240002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.240011 | orchestrator | Wednesday 07 January 2026 00:46:00 +0000 (0:00:00.243) 0:00:57.130 *****
2026-01-07 00:46:08.240020 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.240029 | orchestrator |
2026-01-07 00:46:08.240038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.240123 | orchestrator | Wednesday 07 January 2026 00:46:01 +0000 (0:00:00.211) 0:00:57.342 *****
2026-01-07 00:46:08.240133 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:46:08.240143 | orchestrator |
2026-01-07 00:46:08.240152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-07 00:46:08.240162 | orchestrator | Wednesday 07 January 2026 00:46:01 +0000 (0:00:00.184) 0:00:57.527 *****
2026-01-07 00:46:08.240171 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-07 00:46:08.240181 | orchestrator |
ok: [testbed-node-5] => (item=sda14) 2026-01-07 00:46:08.240191 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-07 00:46:08.240200 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-07 00:46:08.240210 | orchestrator | 2026-01-07 00:46:08.240219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:08.240228 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.641) 0:00:58.168 ***** 2026-01-07 00:46:08.240238 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240247 | orchestrator | 2026-01-07 00:46:08.240256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:08.240266 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.204) 0:00:58.373 ***** 2026-01-07 00:46:08.240277 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240288 | orchestrator | 2026-01-07 00:46:08.240297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:08.240307 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.222) 0:00:58.595 ***** 2026-01-07 00:46:08.240317 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240327 | orchestrator | 2026-01-07 00:46:08.240336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-07 00:46:08.240345 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.197) 0:00:58.793 ***** 2026-01-07 00:46:08.240354 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240364 | orchestrator | 2026-01-07 00:46:08.240373 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-07 00:46:08.240383 | orchestrator | Wednesday 07 January 2026 00:46:02 +0000 (0:00:00.198) 0:00:58.991 ***** 2026-01-07 00:46:08.240392 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
00:46:08.240402 | orchestrator | 2026-01-07 00:46:08.240411 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-07 00:46:08.240420 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.309) 0:00:59.301 ***** 2026-01-07 00:46:08.240430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dee3f89e-6ecc-57ac-a128-7ff5a8885640'}}) 2026-01-07 00:46:08.240449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1079410-ca98-5ed2-be64-415d52b0d3f8'}}) 2026-01-07 00:46:08.240459 | orchestrator | 2026-01-07 00:46:08.240469 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-07 00:46:08.240478 | orchestrator | Wednesday 07 January 2026 00:46:03 +0000 (0:00:00.194) 0:00:59.496 ***** 2026-01-07 00:46:08.240489 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'}) 2026-01-07 00:46:08.240515 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'}) 2026-01-07 00:46:08.240525 | orchestrator | 2026-01-07 00:46:08.240535 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-07 00:46:08.240560 | orchestrator | Wednesday 07 January 2026 00:46:05 +0000 (0:00:01.827) 0:01:01.324 ***** 2026-01-07 00:46:08.240571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:08.240582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:08.240592 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:46:08.240601 | orchestrator | 2026-01-07 00:46:08.240611 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-07 00:46:08.240620 | orchestrator | Wednesday 07 January 2026 00:46:05 +0000 (0:00:00.172) 0:01:01.496 ***** 2026-01-07 00:46:08.240630 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'}) 2026-01-07 00:46:08.240639 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'}) 2026-01-07 00:46:08.240649 | orchestrator | 2026-01-07 00:46:08.240658 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-07 00:46:08.240667 | orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:01.225) 0:01:02.722 ***** 2026-01-07 00:46:08.240677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:08.240686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:08.240695 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240704 | orchestrator | 2026-01-07 00:46:08.240713 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-07 00:46:08.240722 | orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:00.150) 0:01:02.872 ***** 2026-01-07 00:46:08.240732 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240741 | orchestrator | 2026-01-07 00:46:08.240750 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-07 00:46:08.240759 | 
orchestrator | Wednesday 07 January 2026 00:46:06 +0000 (0:00:00.144) 0:01:03.016 ***** 2026-01-07 00:46:08.240769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:08.240782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:08.240791 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240800 | orchestrator | 2026-01-07 00:46:08.240810 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-07 00:46:08.240830 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.195) 0:01:03.211 ***** 2026-01-07 00:46:08.240854 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240863 | orchestrator | 2026-01-07 00:46:08.240873 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-07 00:46:08.240882 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.143) 0:01:03.354 ***** 2026-01-07 00:46:08.240891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:08.240899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:08.240908 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.240917 | orchestrator | 2026-01-07 00:46:08.240925 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-07 00:46:08.240934 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.160) 0:01:03.515 ***** 2026-01-07 00:46:08.240944 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 00:46:08.240952 | orchestrator | 2026-01-07 00:46:08.240962 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-07 00:46:08.240971 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.153) 0:01:03.668 ***** 2026-01-07 00:46:08.240981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:08.240990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:08.240999 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:08.241008 | orchestrator | 2026-01-07 00:46:08.241017 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-07 00:46:08.241026 | orchestrator | Wednesday 07 January 2026 00:46:07 +0000 (0:00:00.171) 0:01:03.839 ***** 2026-01-07 00:46:08.241035 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:08.241057 | orchestrator | 2026-01-07 00:46:08.241067 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-07 00:46:08.241076 | orchestrator | Wednesday 07 January 2026 00:46:08 +0000 (0:00:00.374) 0:01:04.214 ***** 2026-01-07 00:46:08.241092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:14.818378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:14.818463 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818474 | orchestrator | 2026-01-07 00:46:14.818483 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-07 00:46:14.818491 | orchestrator | Wednesday 07 January 2026 00:46:08 +0000 (0:00:00.161) 0:01:04.376 ***** 2026-01-07 00:46:14.818498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:14.818505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:14.818512 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818518 | orchestrator | 2026-01-07 00:46:14.818525 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-07 00:46:14.818532 | orchestrator | Wednesday 07 January 2026 00:46:08 +0000 (0:00:00.172) 0:01:04.548 ***** 2026-01-07 00:46:14.818539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:14.818545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:14.818570 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818578 | orchestrator | 2026-01-07 00:46:14.818585 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-07 00:46:14.818592 | orchestrator | Wednesday 07 January 2026 00:46:08 +0000 (0:00:00.180) 0:01:04.729 ***** 2026-01-07 00:46:14.818598 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818604 | orchestrator | 2026-01-07 00:46:14.818611 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-07 00:46:14.818618 | orchestrator | Wednesday 07 January 2026 00:46:08 
+0000 (0:00:00.145) 0:01:04.874 ***** 2026-01-07 00:46:14.818624 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818630 | orchestrator | 2026-01-07 00:46:14.818636 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-07 00:46:14.818643 | orchestrator | Wednesday 07 January 2026 00:46:08 +0000 (0:00:00.145) 0:01:05.020 ***** 2026-01-07 00:46:14.818649 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818655 | orchestrator | 2026-01-07 00:46:14.818675 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-07 00:46:14.818682 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:00.161) 0:01:05.181 ***** 2026-01-07 00:46:14.818689 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:46:14.818696 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-07 00:46:14.818704 | orchestrator | } 2026-01-07 00:46:14.818711 | orchestrator | 2026-01-07 00:46:14.818717 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-07 00:46:14.818723 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:00.165) 0:01:05.347 ***** 2026-01-07 00:46:14.818730 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:46:14.818736 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-07 00:46:14.818743 | orchestrator | } 2026-01-07 00:46:14.818749 | orchestrator | 2026-01-07 00:46:14.818756 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-07 00:46:14.818762 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:00.127) 0:01:05.475 ***** 2026-01-07 00:46:14.818768 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:46:14.818775 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-07 00:46:14.818781 | orchestrator | } 2026-01-07 00:46:14.818788 | orchestrator | 2026-01-07 00:46:14.818795 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-07 00:46:14.818802 | orchestrator | Wednesday 07 January 2026 00:46:09 +0000 (0:00:00.136) 0:01:05.612 ***** 2026-01-07 00:46:14.818808 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:14.818815 | orchestrator | 2026-01-07 00:46:14.818822 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-07 00:46:14.818828 | orchestrator | Wednesday 07 January 2026 00:46:10 +0000 (0:00:00.568) 0:01:06.180 ***** 2026-01-07 00:46:14.818835 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:14.818841 | orchestrator | 2026-01-07 00:46:14.818848 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-07 00:46:14.818854 | orchestrator | Wednesday 07 January 2026 00:46:10 +0000 (0:00:00.574) 0:01:06.754 ***** 2026-01-07 00:46:14.818860 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:14.818866 | orchestrator | 2026-01-07 00:46:14.818873 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-07 00:46:14.818879 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.820) 0:01:07.575 ***** 2026-01-07 00:46:14.818886 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:14.818892 | orchestrator | 2026-01-07 00:46:14.818898 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-07 00:46:14.818904 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.162) 0:01:07.738 ***** 2026-01-07 00:46:14.818911 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818917 | orchestrator | 2026-01-07 00:46:14.818923 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-07 00:46:14.818936 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.122) 0:01:07.861 ***** 2026-01-07 00:46:14.818943 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.818950 | orchestrator | 2026-01-07 00:46:14.818958 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-07 00:46:14.818964 | orchestrator | Wednesday 07 January 2026 00:46:11 +0000 (0:00:00.129) 0:01:07.990 ***** 2026-01-07 00:46:14.818970 | orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:46:14.818977 | orchestrator |  "vgs_report": { 2026-01-07 00:46:14.818985 | orchestrator |  "vg": [] 2026-01-07 00:46:14.819006 | orchestrator |  } 2026-01-07 00:46:14.819013 | orchestrator | } 2026-01-07 00:46:14.819020 | orchestrator | 2026-01-07 00:46:14.819026 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-07 00:46:14.819033 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.149) 0:01:08.139 ***** 2026-01-07 00:46:14.819055 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819062 | orchestrator | 2026-01-07 00:46:14.819069 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-07 00:46:14.819076 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.164) 0:01:08.304 ***** 2026-01-07 00:46:14.819083 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819090 | orchestrator | 2026-01-07 00:46:14.819097 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-07 00:46:14.819103 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.161) 0:01:08.465 ***** 2026-01-07 00:46:14.819107 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819111 | orchestrator | 2026-01-07 00:46:14.819116 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-07 00:46:14.819120 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.137) 0:01:08.603 ***** 2026-01-07 00:46:14.819124 | 
orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819129 | orchestrator | 2026-01-07 00:46:14.819133 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-07 00:46:14.819137 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.200) 0:01:08.804 ***** 2026-01-07 00:46:14.819141 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819145 | orchestrator | 2026-01-07 00:46:14.819150 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-07 00:46:14.819154 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.166) 0:01:08.971 ***** 2026-01-07 00:46:14.819158 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819162 | orchestrator | 2026-01-07 00:46:14.819167 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-07 00:46:14.819171 | orchestrator | Wednesday 07 January 2026 00:46:12 +0000 (0:00:00.166) 0:01:09.137 ***** 2026-01-07 00:46:14.819175 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819179 | orchestrator | 2026-01-07 00:46:14.819184 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-07 00:46:14.819188 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.147) 0:01:09.285 ***** 2026-01-07 00:46:14.819192 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819196 | orchestrator | 2026-01-07 00:46:14.819200 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-07 00:46:14.819205 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.357) 0:01:09.642 ***** 2026-01-07 00:46:14.819209 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819213 | orchestrator | 2026-01-07 00:46:14.819221 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-07 00:46:14.819226 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.130) 0:01:09.773 ***** 2026-01-07 00:46:14.819230 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819234 | orchestrator | 2026-01-07 00:46:14.819239 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-07 00:46:14.819243 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.141) 0:01:09.914 ***** 2026-01-07 00:46:14.819252 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819256 | orchestrator | 2026-01-07 00:46:14.819260 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-07 00:46:14.819265 | orchestrator | Wednesday 07 January 2026 00:46:13 +0000 (0:00:00.149) 0:01:10.064 ***** 2026-01-07 00:46:14.819269 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819273 | orchestrator | 2026-01-07 00:46:14.819280 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-07 00:46:14.819286 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.151) 0:01:10.215 ***** 2026-01-07 00:46:14.819293 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819299 | orchestrator | 2026-01-07 00:46:14.819306 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-07 00:46:14.819312 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.155) 0:01:10.370 ***** 2026-01-07 00:46:14.819319 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819326 | orchestrator | 2026-01-07 00:46:14.819332 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-07 00:46:14.819338 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.117) 0:01:10.488 ***** 2026-01-07 00:46:14.819346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:14.819353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:14.819360 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819366 | orchestrator | 2026-01-07 00:46:14.819372 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-07 00:46:14.819379 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.167) 0:01:10.656 ***** 2026-01-07 00:46:14.819385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:14.819392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:14.819398 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:14.819405 | orchestrator | 2026-01-07 00:46:14.819411 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-07 00:46:14.819417 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.139) 0:01:10.796 ***** 2026-01-07 00:46:14.819430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773152 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773159 | orchestrator | 2026-01-07 00:46:17.773163 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-07 00:46:17.773169 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.157) 0:01:10.953 ***** 2026-01-07 00:46:17.773173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773180 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773184 | orchestrator | 2026-01-07 00:46:17.773188 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-07 00:46:17.773191 | orchestrator | Wednesday 07 January 2026 00:46:14 +0000 (0:00:00.155) 0:01:11.109 ***** 2026-01-07 00:46:17.773207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773211 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773215 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773218 | orchestrator | 2026-01-07 00:46:17.773222 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-07 00:46:17.773226 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.156) 0:01:11.265 ***** 2026-01-07 00:46:17.773230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773237 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773241 | orchestrator | 2026-01-07 00:46:17.773245 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-07 00:46:17.773248 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.320) 0:01:11.585 ***** 2026-01-07 00:46:17.773252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773260 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773264 | orchestrator | 2026-01-07 00:46:17.773268 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-07 00:46:17.773271 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.169) 0:01:11.754 ***** 2026-01-07 00:46:17.773275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773283 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773286 | orchestrator | 2026-01-07 00:46:17.773290 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-07 00:46:17.773294 | orchestrator | Wednesday 07 January 2026 00:46:15 +0000 (0:00:00.163) 0:01:11.918 ***** 2026-01-07 00:46:17.773297 | 
orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:17.773302 | orchestrator | 2026-01-07 00:46:17.773306 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-07 00:46:17.773309 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.481) 0:01:12.399 ***** 2026-01-07 00:46:17.773313 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:17.773317 | orchestrator | 2026-01-07 00:46:17.773321 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-07 00:46:17.773324 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.475) 0:01:12.874 ***** 2026-01-07 00:46:17.773328 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:17.773332 | orchestrator | 2026-01-07 00:46:17.773335 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-07 00:46:17.773339 | orchestrator | Wednesday 07 January 2026 00:46:16 +0000 (0:00:00.152) 0:01:13.027 ***** 2026-01-07 00:46:17.773343 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'vg_name': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'}) 2026-01-07 00:46:17.773347 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'vg_name': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'}) 2026-01-07 00:46:17.773354 | orchestrator | 2026-01-07 00:46:17.773358 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-07 00:46:17.773362 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.194) 0:01:13.221 ***** 2026-01-07 00:46:17.773381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773389 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773393 | orchestrator | 2026-01-07 00:46:17.773397 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-07 00:46:17.773401 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.165) 0:01:13.387 ***** 2026-01-07 00:46:17.773405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773412 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773416 | orchestrator | 2026-01-07 00:46:17.773420 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-07 00:46:17.773423 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.156) 0:01:13.544 ***** 2026-01-07 00:46:17.773427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})  2026-01-07 00:46:17.773431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})  2026-01-07 00:46:17.773435 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:17.773438 | orchestrator | 2026-01-07 00:46:17.773442 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-07 00:46:17.773446 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.170) 0:01:13.715 ***** 2026-01-07 00:46:17.773449 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-07 00:46:17.773453 | orchestrator |  "lvm_report": { 2026-01-07 00:46:17.773457 | orchestrator |  "lv": [ 2026-01-07 00:46:17.773461 | orchestrator |  { 2026-01-07 00:46:17.773465 | orchestrator |  "lv_name": "osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8", 2026-01-07 00:46:17.773471 | orchestrator |  "vg_name": "ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8" 2026-01-07 00:46:17.773475 | orchestrator |  }, 2026-01-07 00:46:17.773479 | orchestrator |  { 2026-01-07 00:46:17.773482 | orchestrator |  "lv_name": "osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640", 2026-01-07 00:46:17.773486 | orchestrator |  "vg_name": "ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640" 2026-01-07 00:46:17.773490 | orchestrator |  } 2026-01-07 00:46:17.773493 | orchestrator |  ], 2026-01-07 00:46:17.773497 | orchestrator |  "pv": [ 2026-01-07 00:46:17.773501 | orchestrator |  { 2026-01-07 00:46:17.773505 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-07 00:46:17.773508 | orchestrator |  "vg_name": "ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640" 2026-01-07 00:46:17.773512 | orchestrator |  }, 2026-01-07 00:46:17.773516 | orchestrator |  { 2026-01-07 00:46:17.773519 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-07 00:46:17.773523 | orchestrator |  "vg_name": "ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8" 2026-01-07 00:46:17.773527 | orchestrator |  } 2026-01-07 00:46:17.773530 | orchestrator |  ] 2026-01-07 00:46:17.773534 | orchestrator |  } 2026-01-07 00:46:17.773538 | orchestrator | } 2026-01-07 00:46:17.773545 | orchestrator | 2026-01-07 00:46:17.773549 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:46:17.773552 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:46:17.773556 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:46:17.773560 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-07 00:46:17.773564 | orchestrator | 2026-01-07 00:46:17.773567 | orchestrator | 2026-01-07 00:46:17.773571 | orchestrator | 2026-01-07 00:46:17.773575 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:46:17.773579 | orchestrator | Wednesday 07 January 2026 00:46:17 +0000 (0:00:00.168) 0:01:13.883 ***** 2026-01-07 00:46:17.773582 | orchestrator | =============================================================================== 2026-01-07 00:46:17.773586 | orchestrator | Create block VGs -------------------------------------------------------- 5.54s 2026-01-07 00:46:17.773590 | orchestrator | Create block LVs -------------------------------------------------------- 3.95s 2026-01-07 00:46:17.773593 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 2.03s 2026-01-07 00:46:17.773597 | orchestrator | Add known partitions to the list of available block devices ------------- 1.99s 2026-01-07 00:46:17.773601 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.83s 2026-01-07 00:46:17.773604 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.68s 2026-01-07 00:46:17.773608 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.47s 2026-01-07 00:46:17.773612 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.43s 2026-01-07 00:46:17.773618 | orchestrator | Add known links to the list of available block devices ------------------ 1.29s 2026-01-07 00:46:18.132461 | orchestrator | Print LVM report data --------------------------------------------------- 0.96s 2026-01-07 00:46:18.132521 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.96s 2026-01-07 00:46:18.132527 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-01-07 00:46:18.132530 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2026-01-07 00:46:18.132534 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.77s 2026-01-07 00:46:18.132537 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-01-07 00:46:18.132541 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2026-01-07 00:46:18.132544 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-01-07 00:46:18.132547 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2026-01-07 00:46:18.132550 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-01-07 00:46:18.132553 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-01-07 00:46:30.638005 | orchestrator | 2026-01-07 00:46:30 | INFO  | Task 701c3a8c-5d16-41b6-816a-b6d07285123e (facts) was prepared for execution. 2026-01-07 00:46:30.638299 | orchestrator | 2026-01-07 00:46:30 | INFO  | It takes a moment until task 701c3a8c-5d16-41b6-816a-b6d07285123e (facts) has been started and output is visible here. 
2026-01-07 00:46:43.751699 | orchestrator | 2026-01-07 00:46:43.751836 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-07 00:46:43.751857 | orchestrator | 2026-01-07 00:46:43.751870 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-07 00:46:43.751882 | orchestrator | Wednesday 07 January 2026 00:46:34 +0000 (0:00:00.251) 0:00:00.251 ***** 2026-01-07 00:46:43.751928 | orchestrator | ok: [testbed-manager] 2026-01-07 00:46:43.751941 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:46:43.751952 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:46:43.751963 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:46:43.751974 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:46:43.751985 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:43.751996 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:43.752098 | orchestrator | 2026-01-07 00:46:43.752113 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-07 00:46:43.752146 | orchestrator | Wednesday 07 January 2026 00:46:35 +0000 (0:00:01.125) 0:00:01.377 ***** 2026-01-07 00:46:43.752161 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:46:43.752173 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:46:43.752184 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:46:43.752195 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:46:43.752206 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:43.752217 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:43.752228 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:43.752241 | orchestrator | 2026-01-07 00:46:43.752253 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-07 00:46:43.752265 | orchestrator | 2026-01-07 00:46:43.752278 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-07 00:46:43.752291 | orchestrator | Wednesday 07 January 2026 00:46:37 +0000 (0:00:01.209) 0:00:02.586 ***** 2026-01-07 00:46:43.752304 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:46:43.752316 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:46:43.752329 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:46:43.752342 | orchestrator | ok: [testbed-manager] 2026-01-07 00:46:43.752355 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:46:43.752368 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:46:43.752381 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:46:43.752393 | orchestrator | 2026-01-07 00:46:43.752406 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-07 00:46:43.752419 | orchestrator | 2026-01-07 00:46:43.752432 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-07 00:46:43.752445 | orchestrator | Wednesday 07 January 2026 00:46:42 +0000 (0:00:05.729) 0:00:08.316 ***** 2026-01-07 00:46:43.752458 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:46:43.752470 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:46:43.752482 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:46:43.752495 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:46:43.752508 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:46:43.752520 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:46:43.752532 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:46:43.752544 | orchestrator | 2026-01-07 00:46:43.752557 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:46:43.752570 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752584 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-07 00:46:43.752597 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752610 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752621 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752632 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752643 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:46:43.752663 | orchestrator | 2026-01-07 00:46:43.752674 | orchestrator | 2026-01-07 00:46:43.752685 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:46:43.752696 | orchestrator | Wednesday 07 January 2026 00:46:43 +0000 (0:00:00.514) 0:00:08.831 ***** 2026-01-07 00:46:43.752707 | orchestrator | =============================================================================== 2026-01-07 00:46:43.752718 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.73s 2026-01-07 00:46:43.752729 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-01-07 00:46:43.752739 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-01-07 00:46:43.752750 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-01-07 00:46:55.999431 | orchestrator | 2026-01-07 00:46:55 | INFO  | Task f3134d7d-bf8a-4584-8844-42734fa77406 (frr) was prepared for execution. 2026-01-07 00:46:55.999547 | orchestrator | 2026-01-07 00:46:55 | INFO  | It takes a moment until task f3134d7d-bf8a-4584-8844-42734fa77406 (frr) has been started and output is visible here. 
2026-01-07 00:47:21.002404 | orchestrator | 2026-01-07 00:47:21.002524 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-07 00:47:21.002542 | orchestrator | 2026-01-07 00:47:21.002555 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-07 00:47:21.002567 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.248) 0:00:00.248 ***** 2026-01-07 00:47:21.002579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:47:21.002592 | orchestrator | 2026-01-07 00:47:21.002603 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-07 00:47:21.002614 | orchestrator | Wednesday 07 January 2026 00:47:00 +0000 (0:00:00.206) 0:00:00.455 ***** 2026-01-07 00:47:21.002625 | orchestrator | changed: [testbed-manager] 2026-01-07 00:47:21.002637 | orchestrator | 2026-01-07 00:47:21.002648 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-07 00:47:21.002659 | orchestrator | Wednesday 07 January 2026 00:47:01 +0000 (0:00:01.139) 0:00:01.595 ***** 2026-01-07 00:47:21.002686 | orchestrator | changed: [testbed-manager] 2026-01-07 00:47:21.002698 | orchestrator | 2026-01-07 00:47:21.002709 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-07 00:47:21.002720 | orchestrator | Wednesday 07 January 2026 00:47:11 +0000 (0:00:09.633) 0:00:11.228 ***** 2026-01-07 00:47:21.002731 | orchestrator | ok: [testbed-manager] 2026-01-07 00:47:21.002742 | orchestrator | 2026-01-07 00:47:21.002753 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-07 00:47:21.002764 | orchestrator | Wednesday 07 January 2026 00:47:12 +0000 (0:00:01.036) 0:00:12.265 ***** 2026-01-07 
00:47:21.002775 | orchestrator | changed: [testbed-manager] 2026-01-07 00:47:21.002786 | orchestrator | 2026-01-07 00:47:21.002797 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-07 00:47:21.002808 | orchestrator | Wednesday 07 January 2026 00:47:13 +0000 (0:00:00.954) 0:00:13.219 ***** 2026-01-07 00:47:21.002818 | orchestrator | ok: [testbed-manager] 2026-01-07 00:47:21.002829 | orchestrator | 2026-01-07 00:47:21.002841 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-07 00:47:21.002852 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:01.146) 0:00:14.366 ***** 2026-01-07 00:47:21.002863 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:47:21.002874 | orchestrator | 2026-01-07 00:47:21.002885 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-07 00:47:21.002896 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.155) 0:00:14.522 ***** 2026-01-07 00:47:21.002907 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:47:21.002942 | orchestrator | 2026-01-07 00:47:21.002956 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-07 00:47:21.003035 | orchestrator | Wednesday 07 January 2026 00:47:14 +0000 (0:00:00.146) 0:00:14.668 ***** 2026-01-07 00:47:21.003049 | orchestrator | changed: [testbed-manager] 2026-01-07 00:47:21.003062 | orchestrator | 2026-01-07 00:47:21.003075 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-07 00:47:21.003088 | orchestrator | Wednesday 07 January 2026 00:47:15 +0000 (0:00:00.962) 0:00:15.631 ***** 2026-01-07 00:47:21.003101 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-07 00:47:21.003113 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-07 00:47:21.003127 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-07 00:47:21.003140 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-07 00:47:21.003153 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-07 00:47:21.003165 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-07 00:47:21.003178 | orchestrator | 2026-01-07 00:47:21.003191 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-07 00:47:21.003204 | orchestrator | Wednesday 07 January 2026 00:47:17 +0000 (0:00:02.230) 0:00:17.862 ***** 2026-01-07 00:47:21.003217 | orchestrator | ok: [testbed-manager] 2026-01-07 00:47:21.003230 | orchestrator | 2026-01-07 00:47:21.003243 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-07 00:47:21.003257 | orchestrator | Wednesday 07 January 2026 00:47:19 +0000 (0:00:01.557) 0:00:19.419 ***** 2026-01-07 00:47:21.003270 | orchestrator | changed: [testbed-manager] 2026-01-07 00:47:21.003282 | orchestrator | 2026-01-07 00:47:21.003294 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:47:21.003305 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:47:21.003317 | orchestrator | 2026-01-07 00:47:21.003327 | orchestrator | 2026-01-07 00:47:21.003338 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:47:21.003349 | orchestrator | Wednesday 07 January 2026 00:47:20 +0000 (0:00:01.371) 0:00:20.790 ***** 2026-01-07 00:47:21.003360 | 
orchestrator | =============================================================================== 2026-01-07 00:47:21.003371 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.63s 2026-01-07 00:47:21.003381 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s 2026-01-07 00:47:21.003392 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.56s 2026-01-07 00:47:21.003403 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.37s 2026-01-07 00:47:21.003413 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.15s 2026-01-07 00:47:21.003448 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.14s 2026-01-07 00:47:21.003469 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.04s 2026-01-07 00:47:21.003482 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.96s 2026-01-07 00:47:21.003492 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s 2026-01-07 00:47:21.003503 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-01-07 00:47:21.003514 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-01-07 00:47:21.003525 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-01-07 00:47:21.274581 | orchestrator | 2026-01-07 00:47:21.278842 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Jan 7 00:47:21 UTC 2026 2026-01-07 00:47:21.278921 | orchestrator | 2026-01-07 00:47:23.208666 | orchestrator | 2026-01-07 00:47:23 | INFO  | Collection nutshell is prepared for execution 2026-01-07 00:47:23.208782 | orchestrator | 2026-01-07 00:47:23 | INFO  | A [0] - 
dotfiles 2026-01-07 00:47:33.267713 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - homer 2026-01-07 00:47:33.267814 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - netdata 2026-01-07 00:47:33.267825 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - openstackclient 2026-01-07 00:47:33.267833 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - phpmyadmin 2026-01-07 00:47:33.267841 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - common 2026-01-07 00:47:33.270369 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- loadbalancer 2026-01-07 00:47:33.270411 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [2] --- opensearch 2026-01-07 00:47:33.270419 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [2] --- mariadb-ng 2026-01-07 00:47:33.271251 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [3] ---- horizon 2026-01-07 00:47:33.271348 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [3] ---- keystone 2026-01-07 00:47:33.271359 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- neutron 2026-01-07 00:47:33.271366 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ wait-for-nova 2026-01-07 00:47:33.271379 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [6] ------- octavia 2026-01-07 00:47:33.272986 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- barbican 2026-01-07 00:47:33.273181 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- designate 2026-01-07 00:47:33.273196 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- ironic 2026-01-07 00:47:33.273372 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- placement 2026-01-07 00:47:33.273385 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- magnum 2026-01-07 00:47:33.274117 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- openvswitch 2026-01-07 00:47:33.274136 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [2] --- ovn 2026-01-07 00:47:33.274769 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- memcached 2026-01-07 
00:47:33.274829 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- redis 2026-01-07 00:47:33.274951 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- rabbitmq-ng 2026-01-07 00:47:33.275117 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - kubernetes 2026-01-07 00:47:33.277824 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- kubeconfig 2026-01-07 00:47:33.277859 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- copy-kubeconfig 2026-01-07 00:47:33.278084 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [0] - ceph 2026-01-07 00:47:33.280324 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [1] -- ceph-pools 2026-01-07 00:47:33.280348 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [2] --- copy-ceph-keys 2026-01-07 00:47:33.280571 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [3] ---- cephclient 2026-01-07 00:47:33.280924 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-07 00:47:33.280940 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- wait-for-keystone 2026-01-07 00:47:33.280946 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-07 00:47:33.280952 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ glance 2026-01-07 00:47:33.281045 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ cinder 2026-01-07 00:47:33.281079 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ nova 2026-01-07 00:47:33.281454 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [4] ----- prometheus 2026-01-07 00:47:33.281469 | orchestrator | 2026-01-07 00:47:33 | INFO  | A [5] ------ grafana 2026-01-07 00:47:33.482783 | orchestrator | 2026-01-07 00:47:33 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-07 00:47:33.482872 | orchestrator | 2026-01-07 00:47:33 | INFO  | Tasks are running in the background 2026-01-07 00:47:36.340735 | orchestrator | 2026-01-07 00:47:36 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-07 00:47:38.449316 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:38.449408 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:38.450321 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:38.450874 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:38.451509 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:38.452188 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:38.452666 | orchestrator | 2026-01-07 00:47:38 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:38.452708 | orchestrator | 2026-01-07 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:41.499583 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:41.499861 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:41.503574 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:41.503718 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:41.506789 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:41.507143 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:41.507758 | orchestrator | 2026-01-07 00:47:41 | INFO  | Task 
0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:41.507820 | orchestrator | 2026-01-07 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:44.532877 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:44.533052 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:44.533416 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:44.536466 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:44.537149 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:44.541552 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:44.541789 | orchestrator | 2026-01-07 00:47:44 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:44.541909 | orchestrator | 2026-01-07 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:47.618207 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:47.618285 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:47.618291 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:47.618295 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:47.618300 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:47.618304 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task 
72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:47.618308 | orchestrator | 2026-01-07 00:47:47 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:47.618313 | orchestrator | 2026-01-07 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:50.650052 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:50.652140 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:50.652815 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:50.653550 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:50.654279 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:50.654976 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:50.657345 | orchestrator | 2026-01-07 00:47:50 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:50.657381 | orchestrator | 2026-01-07 00:47:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:53.783155 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:53.783235 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:53.783241 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:53.783245 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:53.785830 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task 
890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:53.788032 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:53.792211 | orchestrator | 2026-01-07 00:47:53 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:53.794149 | orchestrator | 2026-01-07 00:47:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:47:56.898165 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:47:56.899597 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state STARTED 2026-01-07 00:47:56.902266 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:47:56.904531 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:47:56.906429 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:47:56.907339 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:47:56.908205 | orchestrator | 2026-01-07 00:47:56 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:47:56.908261 | orchestrator | 2026-01-07 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:00.522868 | orchestrator | 2026-01-07 00:48:00.523001 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-07 00:48:00.523010 | orchestrator | 2026-01-07 00:48:00.523015 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-07 00:48:00.523020 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.880) 0:00:00.880 ***** 2026-01-07 00:48:00.523025 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:48:00.523030 | orchestrator | changed: [testbed-manager] 2026-01-07 00:48:00.523035 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:48:00.523039 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:48:00.523043 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:48:00.523047 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:48:00.523051 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:48:00.523055 | orchestrator | 2026-01-07 00:48:00.523059 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-07 00:48:00.523063 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:03.625) 0:00:04.505 ***** 2026-01-07 00:48:00.523067 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:48:00.523072 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:48:00.523076 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:48:00.523080 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:48:00.523083 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:48:00.523087 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:48:00.523091 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:48:00.523095 | orchestrator | 2026-01-07 00:48:00.523099 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-07 00:48:00.523105 | orchestrator | Wednesday 07 January 2026 00:47:52 +0000 (0:00:02.511) 0:00:07.017 ***** 2026-01-07 00:48:00.523112 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.247764', 'end': '2026-01-07 00:47:50.252304', 'delta': '0:00:00.004540', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523128 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.331845', 'end': '2026-01-07 00:47:50.338320', 'delta': '0:00:00.006475', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523152 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.371600', 'end': '2026-01-07 00:47:50.377032', 'delta': '0:00:00.005432', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523179 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.331310', 'end': '2026-01-07 00:47:50.337580', 'delta': '0:00:00.006270', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523183 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.934360', 'end': '2026-01-07 00:47:50.940480', 'delta': '0:00:00.006120', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523187 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:51.175853', 'end': '2026-01-07 00:47:51.180904', 'delta': '0:00:00.005051', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523204 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-07 00:47:50.845885', 'end': '2026-01-07 00:47:51.851536', 'delta': '0:00:01.005651', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-07 00:48:00.523216 | orchestrator | 2026-01-07 00:48:00.523221 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-07 00:48:00.523224 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:01.426) 0:00:08.458 ***** 2026-01-07 00:48:00.523228 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:48:00.523232 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:48:00.523236 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:48:00.523240 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:48:00.523243 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:48:00.523247 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:48:00.523251 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:48:00.523254 | orchestrator | 2026-01-07 00:48:00.523259 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-07 00:48:00.523265 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:01.091) 0:00:09.549 ***** 2026-01-07 00:48:00.523272 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-07 00:48:00.523278 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-07 00:48:00.523283 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-07 00:48:00.523289 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-07 00:48:00.523295 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-07 00:48:00.523301 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-07 00:48:00.523307 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-07 00:48:00.523312 | orchestrator | 2026-01-07 00:48:00.523318 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:48:00.523330 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523340 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523347 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523353 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523359 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523367 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523371 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:48:00.523375 | orchestrator | 2026-01-07 00:48:00.523378 | orchestrator | 2026-01-07 00:48:00.523382 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-07 00:48:00.523386 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:02.114) 0:00:11.663 ***** 2026-01-07 00:48:00.523390 | orchestrator | =============================================================================== 2026-01-07 00:48:00.523393 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.63s 2026-01-07 00:48:00.523402 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.53s 2026-01-07 00:48:00.523406 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.11s 2026-01-07 00:48:00.523410 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.43s 2026-01-07 00:48:00.523414 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.09s 2026-01-07 00:48:00.523417 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:00.523422 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task c8a9feec-51f2-4cf1-b9a5-736be122badb is in state SUCCESS 2026-01-07 00:48:00.523425 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:00.523429 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:00.523433 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:00.523440 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:00.523444 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:00.523447 | orchestrator | 2026-01-07 00:48:00 | INFO  | Task 
0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:00.523451 | orchestrator | 2026-01-07 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:03.170062 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:03.170121 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:03.170125 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:03.170129 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:03.170133 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:03.170136 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:03.170140 | orchestrator | 2026-01-07 00:48:03 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:03.170143 | orchestrator | 2026-01-07 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:06.157194 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:06.157263 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:06.157273 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:06.157277 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:06.157283 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:06.157293 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 
677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:06.157300 | orchestrator | 2026-01-07 00:48:06 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:06.157307 | orchestrator | 2026-01-07 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:09.196091 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:09.196276 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:09.196289 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:09.196297 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:09.196310 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:09.201298 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:09.201409 | orchestrator | 2026-01-07 00:48:09 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:09.201420 | orchestrator | 2026-01-07 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:12.262245 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:12.263213 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:12.264694 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:12.268614 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:12.268703 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 
72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:12.269824 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:12.275220 | orchestrator | 2026-01-07 00:48:12 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:12.275344 | orchestrator | 2026-01-07 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:15.343971 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:15.345418 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:15.348251 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:15.348803 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:15.351626 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:15.352310 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:15.357284 | orchestrator | 2026-01-07 00:48:15 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:15.357364 | orchestrator | 2026-01-07 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:18.473467 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:18.473551 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:18.474213 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:18.475499 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 
890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:18.476668 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:18.479489 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:18.480641 | orchestrator | 2026-01-07 00:48:18 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:18.480690 | orchestrator | 2026-01-07 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:21.578196 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state STARTED 2026-01-07 00:48:21.578291 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:21.578300 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:21.578318 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:21.578330 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:21.578336 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:21.578357 | orchestrator | 2026-01-07 00:48:21 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:21.578363 | orchestrator | 2026-01-07 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:24.595530 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task e1ac6b73-f513-4a1b-b7e5-925d45b99632 is in state SUCCESS 2026-01-07 00:48:24.595637 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:24.596955 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 
8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:24.597396 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:24.597972 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:24.598404 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:24.599001 | orchestrator | 2026-01-07 00:48:24 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:24.599028 | orchestrator | 2026-01-07 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:27.646440 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:27.651488 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:27.651556 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state STARTED 2026-01-07 00:48:27.652286 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:27.653139 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:27.653667 | orchestrator | 2026-01-07 00:48:27 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:27.654888 | orchestrator | 2026-01-07 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:30.698275 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:30.699832 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:30.699945 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 
890d1b4e-f460-4b66-b65d-e6af0dd4fb3c is in state SUCCESS 2026-01-07 00:48:30.700456 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:30.701081 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:30.702816 | orchestrator | 2026-01-07 00:48:30 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:30.703113 | orchestrator | 2026-01-07 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:33.735986 | orchestrator | 2026-01-07 00:48:33 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:33.736841 | orchestrator | 2026-01-07 00:48:33 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:33.740513 | orchestrator | 2026-01-07 00:48:33 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:33.743040 | orchestrator | 2026-01-07 00:48:33 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:33.745892 | orchestrator | 2026-01-07 00:48:33 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:33.745944 | orchestrator | 2026-01-07 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:36.848489 | orchestrator | 2026-01-07 00:48:36 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:36.852088 | orchestrator | 2026-01-07 00:48:36 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:36.853614 | orchestrator | 2026-01-07 00:48:36 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:36.856299 | orchestrator | 2026-01-07 00:48:36 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:36.859517 | orchestrator | 2026-01-07 00:48:36 | INFO  | Task 
0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:36.859590 | orchestrator | 2026-01-07 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:39.933826 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:39.934612 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:39.941238 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:39.946166 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:39.952175 | orchestrator | 2026-01-07 00:48:39 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:39.952257 | orchestrator | 2026-01-07 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:43.031966 | orchestrator | 2026-01-07 00:48:43 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:43.039856 | orchestrator | 2026-01-07 00:48:43 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:43.043783 | orchestrator | 2026-01-07 00:48:43 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:43.080120 | orchestrator | 2026-01-07 00:48:43 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:43.080180 | orchestrator | 2026-01-07 00:48:43 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:43.080211 | orchestrator | 2026-01-07 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:46.089622 | orchestrator | 2026-01-07 00:48:46 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:46.093020 | orchestrator | 2026-01-07 00:48:46 | INFO  | Task 
8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:46.093667 | orchestrator | 2026-01-07 00:48:46 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:46.094476 | orchestrator | 2026-01-07 00:48:46 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:46.095635 | orchestrator | 2026-01-07 00:48:46 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:46.095707 | orchestrator | 2026-01-07 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:49.159503 | orchestrator | 2026-01-07 00:48:49 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:49.160236 | orchestrator | 2026-01-07 00:48:49 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:49.161584 | orchestrator | 2026-01-07 00:48:49 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:49.164686 | orchestrator | 2026-01-07 00:48:49 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:49.166560 | orchestrator | 2026-01-07 00:48:49 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:49.166630 | orchestrator | 2026-01-07 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:52.198245 | orchestrator | 2026-01-07 00:48:52 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:52.204356 | orchestrator | 2026-01-07 00:48:52 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:52.204449 | orchestrator | 2026-01-07 00:48:52 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:52.204459 | orchestrator | 2026-01-07 00:48:52 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:52.204467 | orchestrator | 2026-01-07 00:48:52 | INFO  | Task 
0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:52.204475 | orchestrator | 2026-01-07 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:55.247727 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:55.248234 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:55.249096 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:55.249774 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:55.251114 | orchestrator | 2026-01-07 00:48:55 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:55.251171 | orchestrator | 2026-01-07 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:48:58.282784 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:48:58.284453 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED 2026-01-07 00:48:58.285845 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:48:58.287126 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED 2026-01-07 00:48:58.288509 | orchestrator | 2026-01-07 00:48:58 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED 2026-01-07 00:48:58.288653 | orchestrator | 2026-01-07 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:49:01.327691 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:49:01.329106 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 
8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:01.332186 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:01.336033 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:01.338924 | orchestrator | 2026-01-07 00:49:01 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state STARTED
2026-01-07 00:49:01.338994 | orchestrator | 2026-01-07 00:49:01 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:04.383352 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:04.385596 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:04.389102 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:04.390173 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:04.392546 | orchestrator | 2026-01-07 00:49:04 | INFO  | Task 0136fe71-b33e-4051-8f7e-4a12c7c7eeb3 is in state SUCCESS
2026-01-07 00:49:04.395513 | orchestrator |
2026-01-07 00:49:04.395582 | orchestrator |
2026-01-07 00:49:04.395597 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-07 00:49:04.395609 | orchestrator |
2026-01-07 00:49:04.395621 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-07 00:49:04.395633 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.565) 0:00:00.565 *****
2026-01-07 00:49:04.395644 | orchestrator | ok: [testbed-manager] => {
2026-01-07 00:49:04.395657 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-07 00:49:04.395669 | orchestrator | }
2026-01-07 00:49:04.395680 | orchestrator |
2026-01-07 00:49:04.395691 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-07 00:49:04.395702 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.363) 0:00:00.932 *****
2026-01-07 00:49:04.395713 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.395724 | orchestrator |
2026-01-07 00:49:04.395736 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-07 00:49:04.395747 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:02.025) 0:00:02.957 *****
2026-01-07 00:49:04.395758 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-07 00:49:04.395769 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-07 00:49:04.395780 | orchestrator |
2026-01-07 00:49:04.395791 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-07 00:49:04.395802 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:01.784) 0:00:04.742 *****
2026-01-07 00:49:04.395813 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.395824 | orchestrator |
2026-01-07 00:49:04.395835 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-07 00:49:04.395846 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:02.405) 0:00:07.147 *****
2026-01-07 00:49:04.395909 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.395923 | orchestrator |
2026-01-07 00:49:04.395933 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-07 00:49:04.395944 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:01.611) 0:00:08.758 *****
2026-01-07 00:49:04.395955 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-07 00:49:04.395966 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.395977 | orchestrator |
2026-01-07 00:49:04.395988 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-07 00:49:04.395999 | orchestrator | Wednesday 07 January 2026 00:48:18 +0000 (0:00:25.426) 0:00:34.185 *****
2026-01-07 00:49:04.396010 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396020 | orchestrator |
2026-01-07 00:49:04.396031 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:49:04.396050 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.396062 | orchestrator |
2026-01-07 00:49:04.396073 | orchestrator |
2026-01-07 00:49:04.396084 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:49:04.396095 | orchestrator | Wednesday 07 January 2026 00:48:21 +0000 (0:00:02.604) 0:00:36.789 *****
2026-01-07 00:49:04.396106 | orchestrator | ===============================================================================
2026-01-07 00:49:04.396117 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.43s
2026-01-07 00:49:04.396128 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.60s
2026-01-07 00:49:04.396139 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.41s
2026-01-07 00:49:04.396149 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.03s
2026-01-07 00:49:04.396160 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.78s
2026-01-07 00:49:04.396171 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.61s
2026-01-07 00:49:04.396182 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.37s
2026-01-07 00:49:04.396192 | orchestrator |
2026-01-07 00:49:04.396203 | orchestrator |
2026-01-07 00:49:04.396214 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-07 00:49:04.396226 | orchestrator |
2026-01-07 00:49:04.396237 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-07 00:49:04.396247 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.686) 0:00:00.686 *****
2026-01-07 00:49:04.396258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-07 00:49:04.396271 | orchestrator |
2026-01-07 00:49:04.396282 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-07 00:49:04.396293 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.203) 0:00:00.890 *****
2026-01-07 00:49:04.396303 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-07 00:49:04.396314 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-07 00:49:04.396325 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-07 00:49:04.396336 | orchestrator |
2026-01-07 00:49:04.396347 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-07 00:49:04.396358 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:02.488) 0:00:03.378 *****
2026-01-07 00:49:04.396369 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396380 | orchestrator |
2026-01-07 00:49:04.396391 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-07 00:49:04.396402 |
orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:02.845) 0:00:06.223 *****
2026-01-07 00:49:04.396428 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-07 00:49:04.396447 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.396458 | orchestrator |
2026-01-07 00:49:04.396469 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-07 00:49:04.396480 | orchestrator | Wednesday 07 January 2026 00:48:23 +0000 (0:00:32.581) 0:00:38.804 *****
2026-01-07 00:49:04.396490 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396501 | orchestrator |
2026-01-07 00:49:04.396513 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-07 00:49:04.396524 | orchestrator | Wednesday 07 January 2026 00:48:25 +0000 (0:00:01.301) 0:00:40.106 *****
2026-01-07 00:49:04.396535 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.396546 | orchestrator |
2026-01-07 00:49:04.396557 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-07 00:49:04.396568 | orchestrator | Wednesday 07 January 2026 00:48:25 +0000 (0:00:00.660) 0:00:40.767 *****
2026-01-07 00:49:04.396579 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396590 | orchestrator |
2026-01-07 00:49:04.396601 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-07 00:49:04.396612 | orchestrator | Wednesday 07 January 2026 00:48:27 +0000 (0:00:01.889) 0:00:42.656 *****
2026-01-07 00:49:04.396623 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396634 | orchestrator |
2026-01-07 00:49:04.396645 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-07 00:49:04.396656 | orchestrator | Wednesday 07 January 2026 00:48:28 +0000 (0:00:01.312) 0:00:43.969 *****
2026-01-07 00:49:04.396666 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.396678 | orchestrator |
2026-01-07 00:49:04.396688 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-07 00:49:04.396699 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:00.523) 0:00:44.493 *****
2026-01-07 00:49:04.396710 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.396721 | orchestrator |
2026-01-07 00:49:04.396732 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:49:04.396743 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.396754 | orchestrator |
2026-01-07 00:49:04.396765 | orchestrator |
2026-01-07 00:49:04.396776 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:49:04.396787 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:00.375) 0:00:44.868 *****
2026-01-07 00:49:04.396798 | orchestrator | ===============================================================================
2026-01-07 00:49:04.396809 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.58s
2026-01-07 00:49:04.396820 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.85s
2026-01-07 00:49:04.396831 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.49s
2026-01-07 00:49:04.396846 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.89s
2026-01-07 00:49:04.396858 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.31s
2026-01-07 00:49:04.396915 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.30s
2026-01-07 00:49:04.396926 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.66s
2026-01-07 00:49:04.396937 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.52s
2026-01-07 00:49:04.396948 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s
2026-01-07 00:49:04.396960 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.20s
2026-01-07 00:49:04.396970 | orchestrator |
2026-01-07 00:49:04.396981 | orchestrator |
2026-01-07 00:49:04.396992 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:49:04.397003 | orchestrator |
2026-01-07 00:49:04.397014 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:49:04.397032 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.535) 0:00:00.535 *****
2026-01-07 00:49:04.397043 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-07 00:49:04.397054 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-07 00:49:04.397065 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-07 00:49:04.397076 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-07 00:49:04.397087 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-07 00:49:04.397098 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-07 00:49:04.397108 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-07 00:49:04.397119 | orchestrator |
2026-01-07 00:49:04.397131 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-07 00:49:04.397142 | orchestrator |
2026-01-07 00:49:04.397153 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-07 00:49:04.397163 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:02.161) 0:00:02.697 *****
2026-01-07 00:49:04.397335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:49:04.397418 | orchestrator |
2026-01-07 00:49:04.397430 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-07 00:49:04.397440 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:01.512) 0:00:04.209 *****
2026-01-07 00:49:04.397447 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:49:04.397456 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.397464 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:49:04.397471 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:49:04.397478 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:49:04.397504 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:49:04.397512 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:49:04.397520 | orchestrator |
2026-01-07 00:49:04.397527 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-07 00:49:04.397535 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:02.334) 0:00:06.543 *****
2026-01-07 00:49:04.397542 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:49:04.397550 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:49:04.397557 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:49:04.397568 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.397580 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:49:04.397592 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:49:04.397604 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:49:04.397617 | orchestrator |
2026-01-07 00:49:04.397630 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-07 00:49:04.397638 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:04.140) 0:00:10.684 *****
2026-01-07 00:49:04.397645 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.397653 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:49:04.397660 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:49:04.397667 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:49:04.397674 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:49:04.397681 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:49:04.397688 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:49:04.397696 | orchestrator |
2026-01-07 00:49:04.397703 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-07 00:49:04.397710 | orchestrator | Wednesday 07 January 2026 00:47:57 +0000 (0:00:01.818) 0:00:12.502 *****
2026-01-07 00:49:04.397717 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:49:04.397725 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:49:04.397731 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:49:04.397748 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:49:04.397774 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:49:04.397781 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:49:04.397788 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.397795 | orchestrator |
2026-01-07 00:49:04.397803 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-07 00:49:04.397810 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:10.447) 0:00:22.950 *****
2026-01-07 00:49:04.397817 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:49:04.397824 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:49:04.397831 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:49:04.397838 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:49:04.397845 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:49:04.397852 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:49:04.397881 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.397895 | orchestrator |
2026-01-07 00:49:04.397902 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-07 00:49:04.397910 | orchestrator | Wednesday 07 January 2026 00:48:43 +0000 (0:00:36.439) 0:00:59.390 *****
2026-01-07 00:49:04.397918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:49:04.397927 | orchestrator |
2026-01-07 00:49:04.397934 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-07 00:49:04.397941 | orchestrator | Wednesday 07 January 2026 00:48:45 +0000 (0:00:01.503) 0:01:00.893 *****
2026-01-07 00:49:04.397949 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-07 00:49:04.397957 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-07 00:49:04.397964 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-07 00:49:04.397971 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-07 00:49:04.397978 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-07 00:49:04.397986 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-07 00:49:04.397993 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-07 00:49:04.398000 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-07 00:49:04.398007 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-07 00:49:04.398086 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-07 00:49:04.398095 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-07 00:49:04.398102 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-07 00:49:04.398109 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-07 00:49:04.398117 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-07 00:49:04.398124 | orchestrator |
2026-01-07 00:49:04.398132 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-07 00:49:04.398140 | orchestrator | Wednesday 07 January 2026 00:48:50 +0000 (0:00:05.507) 0:01:06.401 *****
2026-01-07 00:49:04.398147 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.398155 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:49:04.398162 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:49:04.398169 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:49:04.398176 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:49:04.398183 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:49:04.398190 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:49:04.398198 | orchestrator |
2026-01-07 00:49:04.398205 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-07 00:49:04.398212 | orchestrator | Wednesday 07 January 2026 00:48:51 +0000 (0:00:01.003) 0:01:07.404 *****
2026-01-07 00:49:04.398219 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.398226 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:49:04.398233 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:49:04.398247 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:49:04.398254 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:49:04.398261 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:49:04.398268 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:49:04.398276 | orchestrator |
2026-01-07 00:49:04.398283 | orchestrator | TASK
[osism.services.netdata : Add netdata user to docker group] ***************
2026-01-07 00:49:04.398299 | orchestrator | Wednesday 07 January 2026 00:48:53 +0000 (0:00:01.472) 0:01:08.876 *****
2026-01-07 00:49:04.398307 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:49:04.398315 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.398322 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:49:04.398329 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:49:04.398336 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:49:04.398343 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:49:04.398350 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:49:04.398357 | orchestrator |
2026-01-07 00:49:04.398365 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-07 00:49:04.398404 | orchestrator | Wednesday 07 January 2026 00:48:54 +0000 (0:00:01.132) 0:01:10.009 *****
2026-01-07 00:49:04.398412 | orchestrator | ok: [testbed-manager]
2026-01-07 00:49:04.398419 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:49:04.398426 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:49:04.398433 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:49:04.398441 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:49:04.398448 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:49:04.398455 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:49:04.398462 | orchestrator |
2026-01-07 00:49:04.398469 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-07 00:49:04.398476 | orchestrator | Wednesday 07 January 2026 00:48:56 +0000 (0:00:01.686) 0:01:11.696 *****
2026-01-07 00:49:04.398484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-07 00:49:04.398492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:49:04.398500 | orchestrator |
2026-01-07 00:49:04.398507 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-07 00:49:04.398514 | orchestrator | Wednesday 07 January 2026 00:48:57 +0000 (0:00:01.345) 0:01:13.041 *****
2026-01-07 00:49:04.398521 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.398529 | orchestrator |
2026-01-07 00:49:04.398536 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-07 00:49:04.398543 | orchestrator | Wednesday 07 January 2026 00:48:59 +0000 (0:00:01.680) 0:01:14.722 *****
2026-01-07 00:49:04.398550 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:49:04.398557 | orchestrator | changed: [testbed-manager]
2026-01-07 00:49:04.398565 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:49:04.398572 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:49:04.398579 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:49:04.398586 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:49:04.398593 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:49:04.398600 | orchestrator |
2026-01-07 00:49:04.398607 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:49:04.398615 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398631 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398639 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398646 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398658 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398665 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398672 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:49:04.398679 | orchestrator |
2026-01-07 00:49:04.398687 | orchestrator |
2026-01-07 00:49:04.398694 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:49:04.398701 | orchestrator | Wednesday 07 January 2026 00:49:02 +0000 (0:00:03.060) 0:01:17.783 *****
2026-01-07 00:49:04.398708 | orchestrator | ===============================================================================
2026-01-07 00:49:04.398716 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.44s
2026-01-07 00:49:04.398723 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.45s
2026-01-07 00:49:04.398730 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.50s
2026-01-07 00:49:04.398737 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.14s
2026-01-07 00:49:04.398744 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.06s
2026-01-07 00:49:04.398751 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.33s
2026-01-07 00:49:04.398758 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.16s
2026-01-07 00:49:04.398765 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.82s
2026-01-07 00:49:04.398772 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.69s
2026-01-07 00:49:04.398779 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.68s
2026-01-07 00:49:04.398787 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.51s
2026-01-07 00:49:04.398798 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.51s
2026-01-07 00:49:04.398806 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.47s
2026-01-07 00:49:04.398813 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s
2026-01-07 00:49:04.398820 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.13s
2026-01-07 00:49:04.398827 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.00s
2026-01-07 00:49:04.398835 | orchestrator | 2026-01-07 00:49:04 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:07.431685 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:07.432322 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:07.432920 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:07.433561 | orchestrator | 2026-01-07 00:49:07 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:07.435677 | orchestrator | 2026-01-07 00:49:07 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:10.478444 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:10.480808 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:10.483696 | orchestrator | 2026-01-07 00:49:10 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:10.486639 |
orchestrator | 2026-01-07 00:49:10 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:10.486727 | orchestrator | 2026-01-07 00:49:10 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:13.539387 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:13.542479 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:13.545060 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:13.547699 | orchestrator | 2026-01-07 00:49:13 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:13.547776 | orchestrator | 2026-01-07 00:49:13 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:16.603380 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:16.603432 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:16.603992 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:16.605232 | orchestrator | 2026-01-07 00:49:16 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state STARTED
2026-01-07 00:49:16.605468 | orchestrator | 2026-01-07 00:49:16 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:19.666005 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:19.669879 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:19.674634 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:19.676669 | orchestrator | 2026-01-07 00:49:19 | INFO  | Task 677eff53-7564-4fe5-b6b4-8a9cca966f0b is in state SUCCESS
2026-01-07 00:49:19.676726 | orchestrator | 2026-01-07 00:49:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:22.736031 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:22.737092 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:22.745195 | orchestrator | 2026-01-07 00:49:22 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:22.745254 | orchestrator | 2026-01-07 00:49:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:25.795819 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:25.796305 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:25.797296 | orchestrator | 2026-01-07 00:49:25 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:25.797333 | orchestrator | 2026-01-07 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:28.850492 | orchestrator | 2026-01-07 00:49:28 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:28.852241 | orchestrator | 2026-01-07 00:49:28 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:28.853945 | orchestrator | 2026-01-07 00:49:28 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:28.854087 | orchestrator | 2026-01-07 00:49:28 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:31.894201 | orchestrator | 2026-01-07 00:49:31 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:31.896444 | orchestrator | 2026-01-07 00:49:31 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:31.897625 | orchestrator | 2026-01-07 00:49:31 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:31.897669 | orchestrator | 2026-01-07 00:49:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:34.955168 | orchestrator | 2026-01-07 00:49:34 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:34.955417 | orchestrator | 2026-01-07 00:49:34 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:34.955814 | orchestrator | 2026-01-07 00:49:34 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:34.955929 | orchestrator | 2026-01-07 00:49:34 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:38.001925 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:38.008651 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:38.010186 | orchestrator | 2026-01-07 00:49:38 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:38.011116 | orchestrator | 2026-01-07 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:41.051869 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:41.052724 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:41.054834 | orchestrator | 2026-01-07 00:49:41 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:41.054910 | orchestrator | 2026-01-07 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:44.102218 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:44.105810 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:44.108187 | orchestrator | 2026-01-07 00:49:44 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:44.108511 | orchestrator | 2026-01-07 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:47.144967 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:47.147913 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:47.147988 | orchestrator | 2026-01-07 00:49:47 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:47.148000 | orchestrator | 2026-01-07 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:50.177759 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:50.179085 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:50.181306 | orchestrator | 2026-01-07 00:49:50 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:50.181348 | orchestrator | 2026-01-07 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:53.210108 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:53.210233 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:53.211323 | orchestrator | 2026-01-07 00:49:53 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:53.211361 | orchestrator | 2026-01-07 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:56.248274 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:56.249899 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:56.251464 | orchestrator | 2026-01-07 00:49:56 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:56.251865 | orchestrator | 2026-01-07 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:49:59.301549 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:49:59.303321 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state STARTED
2026-01-07 00:49:59.304755 | orchestrator | 2026-01-07 00:49:59 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:49:59.304783 | orchestrator | 2026-01-07 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:02.393522 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:02.393873 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:02.395644 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:02.407274 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 8c2dabd7-59c7-46ad-9bca-a41853d20f7b is in state SUCCESS
2026-01-07 00:50:02.409690 | orchestrator |
2026-01-07 00:50:02.409757 | orchestrator |
2026-01-07 00:50:02.409768 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-01-07 00:50:02.409776 | orchestrator |
2026-01-07 00:50:02.409783 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-07 00:50:02.409846 | orchestrator | Wednesday 07 January 2026
00:48:02 +0000 (0:00:00.502) 0:00:00.502 ***** 2026-01-07 00:50:02.409856 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:02.409864 | orchestrator | 2026-01-07 00:50:02.409871 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-07 00:50:02.409878 | orchestrator | Wednesday 07 January 2026 00:48:03 +0000 (0:00:01.164) 0:00:01.667 ***** 2026-01-07 00:50:02.409902 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-07 00:50:02.409910 | orchestrator | 2026-01-07 00:50:02.409916 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-07 00:50:02.409922 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:00.623) 0:00:02.290 ***** 2026-01-07 00:50:02.409929 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:02.409935 | orchestrator | 2026-01-07 00:50:02.409941 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-07 00:50:02.409948 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:01.185) 0:00:03.475 ***** 2026-01-07 00:50:02.409954 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-01-07 00:50:02.409961 | orchestrator | ok: [testbed-manager] 2026-01-07 00:50:02.409968 | orchestrator | 2026-01-07 00:50:02.409975 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-07 00:50:02.409981 | orchestrator | Wednesday 07 January 2026 00:49:04 +0000 (0:00:59.492) 0:01:02.968 ***** 2026-01-07 00:50:02.409988 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:02.410052 | orchestrator | 2026-01-07 00:50:02.410063 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:50:02.410071 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:02.410079 | orchestrator | 2026-01-07 00:50:02.410086 | orchestrator | 2026-01-07 00:50:02.410093 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:50:02.410100 | orchestrator | Wednesday 07 January 2026 00:49:17 +0000 (0:00:12.976) 0:01:15.945 ***** 2026-01-07 00:50:02.410107 | orchestrator | =============================================================================== 2026-01-07 00:50:02.410114 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 59.49s 2026-01-07 00:50:02.410120 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 12.98s 2026-01-07 00:50:02.410126 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.19s 2026-01-07 00:50:02.410133 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.16s 2026-01-07 00:50:02.410139 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s 2026-01-07 00:50:02.410146 | orchestrator | 2026-01-07 00:50:02.410153 | orchestrator | 2026-01-07 00:50:02.410159 | orchestrator | PLAY [Apply role common] 
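The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from the osism client polling Celery task state once per second until every task finishes. A minimal sketch of that polling pattern, assuming a hypothetical `get_state` callable in place of the real API call (this is illustrative, not the actual osism implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll each task's state every `interval` seconds until all reach SUCCESS.

    `get_state(task_id)` is a stand-in for whatever call reports the task
    state; the states seen in the log are STARTED and SUCCESS.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            return True  # every task finished
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False  # gave up after max_checks polls
```

Note that tasks leave the pending set individually, which matches the log: once `8c2dabd7…` reports SUCCESS, only the remaining task IDs keep appearing in later polling cycles.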
******************************************************* 2026-01-07 00:50:02.410166 | orchestrator | 2026-01-07 00:50:02.410174 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-07 00:50:02.410181 | orchestrator | Wednesday 07 January 2026 00:47:38 +0000 (0:00:00.253) 0:00:00.253 ***** 2026-01-07 00:50:02.410188 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:50:02.410197 | orchestrator | 2026-01-07 00:50:02.410204 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-07 00:50:02.410210 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:01.108) 0:00:01.362 ***** 2026-01-07 00:50:02.410217 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410225 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410232 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410239 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410247 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410254 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410261 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410269 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410276 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410282 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410289 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410296 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410305 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-07 00:50:02.410312 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410319 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410326 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410347 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410364 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-07 00:50:02.410372 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410381 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410389 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-07 00:50:02.410396 | orchestrator | 2026-01-07 00:50:02.410407 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-07 00:50:02.410416 | orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:04.352) 0:00:05.715 ***** 2026-01-07 00:50:02.410425 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:50:02.410434 | orchestrator | 2026-01-07 
00:50:02.410441 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-07 00:50:02.410449 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:01.160) 0:00:06.875 ***** 2026-01-07 00:50:02.410460 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410513 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410558 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.410587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410650 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.410706 | orchestrator | 2026-01-07 00:50:02.410712 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-07 00:50:02.410723 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:04.620) 0:00:11.495 ***** 2026-01-07 00:50:02.410732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:50:02.410740 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410754 | orchestrator | skipping: [testbed-manager] 2026-01-07 00:50:02.410761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:50:02.410767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410784 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 00:50:02.410809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:50:02.410824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-07 00:50:02.410854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.410868 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:50:02.410874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.410885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410902 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:50:02.410909 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:50:02.410918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.410926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.410946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible',
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.410964 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:50:02.410971 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:50:02.410977 | orchestrator |
2026-01-07 00:50:02.410984 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-07 00:50:02.410991 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:01.768) 0:00:13.264 *****
2026-01-07 00:50:02.410998 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411026 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411034 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411040 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:50:02.411047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged':
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411073 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:50:02.411079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411152 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:50:02.411159 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:50:02.411166 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:50:02.411173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411202 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:50:02.411208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411232 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:50:02.411239 | orchestrator |
2026-01-07 00:50:02.411246 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-07 00:50:02.411253 | orchestrator | Wednesday 07 January 2026 00:47:55 +0000 (0:00:03.961) 0:00:17.225 *****
2026-01-07 00:50:02.411259 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:50:02.411266 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:50:02.411272 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:50:02.411279 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:50:02.411285 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:50:02.411291 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:50:02.411298 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:50:02.411304 | orchestrator |
2026-01-07 00:50:02.411311 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-07 00:50:02.411318 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:01.447) 0:00:18.673 *****
2026-01-07 00:50:02.411324 | orchestrator | skipping: [testbed-manager]
2026-01-07 00:50:02.411331 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:50:02.411337 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:50:02.411343 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:50:02.411350 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:50:02.411357 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:50:02.411375 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:50:02.411381 | orchestrator |
2026-01-07 00:50:02.411388 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-07 00:50:02.411394 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:01.835) 0:00:20.509 *****
2026-01-07 00:50:02.411405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411412 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411474 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.411493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411530 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.411577 | orchestrator |
2026-01-07 00:50:02.411584 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-07 00:50:02.411591 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:06.364) 0:00:26.873 *****
2026-01-07 00:50:02.411597 | orchestrator | [WARNING]: Skipped
2026-01-07 00:50:02.411605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-07 00:50:02.411611 | orchestrator | to this access issue:
2026-01-07 00:50:02.411618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-07 00:50:02.411624 | orchestrator | directory
2026-01-07 00:50:02.411631 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:50:02.411638 | orchestrator |
2026-01-07 00:50:02.411645 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-07 00:50:02.411652 | orchestrator | Wednesday 07 January 2026 00:48:06 +0000 (0:00:01.279) 0:00:28.153 *****
2026-01-07 00:50:02.411659 | orchestrator | [WARNING]: Skipped
2026-01-07 00:50:02.411666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-07 00:50:02.411684 | orchestrator | to this access issue:
2026-01-07 00:50:02.411692 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-07 00:50:02.411698 | orchestrator | directory
2026-01-07 00:50:02.411705 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:50:02.411712 | orchestrator |
2026-01-07 00:50:02.411719 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-07 00:50:02.411731 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:01.059) 0:00:29.213 *****
2026-01-07 00:50:02.411738 | orchestrator | [WARNING]: Skipped
2026-01-07 00:50:02.411744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-07 00:50:02.411751 | orchestrator | to this access issue:
2026-01-07 00:50:02.411757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-07 00:50:02.411763 | orchestrator | directory
2026-01-07 00:50:02.411769 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:50:02.411776 | orchestrator |
2026-01-07 00:50:02.411814 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-07 00:50:02.411823 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:00.717) 0:00:29.930 *****
2026-01-07 00:50:02.411829 | orchestrator | [WARNING]: Skipped
2026-01-07 00:50:02.411836 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-07 00:50:02.411842 | orchestrator | to this access issue:
2026-01-07 00:50:02.411849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-07 00:50:02.411854 | orchestrator | directory
2026-01-07 00:50:02.411861 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 00:50:02.411867 | orchestrator |
2026-01-07 00:50:02.411873 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-07 00:50:02.411884 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:00.845) 0:00:30.775 *****
2026-01-07 00:50:02.411891 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:02.411898 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:02.411904 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:02.411910 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:02.411917 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:02.411923 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:02.411929 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:02.411936 | orchestrator |
2026-01-07 00:50:02.411942 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-07 00:50:02.411948 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:03.046) 0:00:33.821 *****
2026-01-07 00:50:02.411955 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411981 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411988 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.411994 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-07 00:50:02.412001 | orchestrator |
2026-01-07 00:50:02.412007 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-07 00:50:02.412013 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:02.642) 0:00:36.866 *****
2026-01-07 00:50:02.412020 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:02.412026 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:02.412033 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:02.412039 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:02.412045 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:02.412052 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:02.412058 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:02.412064 | orchestrator |
2026-01-07 00:50:02.412071 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-07 00:50:02.412082 | orchestrator | Wednesday 07 January 2026 00:48:17 +0000 (0:00:02.642) 0:00:39.509 *****
2026-01-07 00:50:02.412090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.412097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:50:02.412104 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-07 00:50:02.412122 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412133 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.412148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 
00:50:02.412164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.412178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412190 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.412205 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412212 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412218 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.412228 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:50:02.412241 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412378 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412394 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412403 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412410 | orchestrator | 2026-01-07 00:50:02.412418 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-07 00:50:02.412425 | orchestrator | Wednesday 07 January 2026 00:48:20 +0000 (0:00:02.879) 0:00:42.388 ***** 2026-01-07 
00:50:02.412432 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412459 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412472 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412478 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-07 00:50:02.412485 | orchestrator | 2026-01-07 00:50:02.412492 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-07 00:50:02.412499 | orchestrator | Wednesday 07 January 2026 00:48:23 +0000 (0:00:03.414) 0:00:45.803 ***** 2026-01-07 00:50:02.412506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412513 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412526 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412533 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412539 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412546 | 
orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-07 00:50:02.412554 | orchestrator | 2026-01-07 00:50:02.412561 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-07 00:50:02.412568 | orchestrator | Wednesday 07 January 2026 00:48:26 +0000 (0:00:02.549) 0:00:48.352 ***** 2026-01-07 00:50:02.412575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412600 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412664 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-07 00:50:02.412694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412716 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:50:02.412776 | orchestrator | 2026-01-07 00:50:02.412784 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-07 00:50:02.412802 | orchestrator | Wednesday 07 January 2026 00:48:29 +0000 (0:00:03.679) 0:00:52.032 ***** 2026-01-07 00:50:02.412808 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:02.412815 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:02.412821 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:02.412827 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:02.412833 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:50:02.412839 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:50:02.412845 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:50:02.412851 | orchestrator | 2026-01-07 00:50:02.412857 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-07 00:50:02.412864 | orchestrator | Wednesday 07 January 2026 00:48:31 +0000 (0:00:01.683) 0:00:53.715 ***** 2026-01-07 00:50:02.412870 | orchestrator | changed: [testbed-manager] 2026-01-07 00:50:02.412877 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:02.412884 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:02.412891 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:02.412897 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:50:02.412904 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:50:02.412911 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:50:02.412918 | orchestrator | 2026-01-07 00:50:02.412924 | orchestrator | TASK [common : Flush handlers] 
*************************************************
2026-01-07 00:50:02.412931 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:01.321) 0:00:55.037 *****
2026-01-07 00:50:02.412937 | orchestrator |
2026-01-07 00:50:02.412944 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.412950 | orchestrator | Wednesday 07 January 2026 00:48:32 +0000 (0:00:00.066) 0:00:55.104 *****
2026-01-07 00:50:02.412957 | orchestrator |
2026-01-07 00:50:02.412963 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.412970 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.063) 0:00:55.167 *****
2026-01-07 00:50:02.412977 | orchestrator |
2026-01-07 00:50:02.412984 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.412990 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.232) 0:00:55.399 *****
2026-01-07 00:50:02.412997 | orchestrator |
2026-01-07 00:50:02.413003 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.413016 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.068) 0:00:55.468 *****
2026-01-07 00:50:02.413022 | orchestrator |
2026-01-07 00:50:02.413029 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.413036 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.065) 0:00:55.533 *****
2026-01-07 00:50:02.413043 | orchestrator |
2026-01-07 00:50:02.413050 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-07 00:50:02.413057 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.072) 0:00:55.606 *****
2026-01-07 00:50:02.413063 | orchestrator |
2026-01-07 00:50:02.413070 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-07 00:50:02.413078 | orchestrator | Wednesday 07 January 2026 00:48:33 +0000 (0:00:00.093) 0:00:55.699 *****
2026-01-07 00:50:02.413089 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:02.413096 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:02.413102 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:02.413109 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:02.413115 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:02.413122 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:02.413129 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:02.413136 | orchestrator |
2026-01-07 00:50:02.413143 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-07 00:50:02.413150 | orchestrator | Wednesday 07 January 2026 00:49:06 +0000 (0:00:33.343) 0:01:29.042 *****
2026-01-07 00:50:02.413157 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:02.413164 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:02.413175 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:02.413183 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:02.413190 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:02.413198 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:02.413206 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:02.413214 | orchestrator |
2026-01-07 00:50:02.413222 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-07 00:50:02.413229 | orchestrator | Wednesday 07 January 2026 00:49:48 +0000 (0:00:41.818) 0:02:10.861 *****
2026-01-07 00:50:02.413237 | orchestrator | ok: [testbed-manager]
2026-01-07 00:50:02.413245 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:02.413251 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:02.413258 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:02.413265 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:50:02.413272 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:50:02.413278 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:50:02.413285 | orchestrator |
2026-01-07 00:50:02.413292 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-07 00:50:02.413299 | orchestrator | Wednesday 07 January 2026 00:49:50 +0000 (0:00:02.118) 0:02:12.980 *****
2026-01-07 00:50:02.413306 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:02.413312 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:02.413319 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:50:02.413325 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:50:02.413332 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:02.413339 | orchestrator | changed: [testbed-manager]
2026-01-07 00:50:02.413346 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:50:02.413352 | orchestrator |
2026-01-07 00:50:02.413358 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:50:02.413365 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413372 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413379 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413391 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413397 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413404 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413410 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-07 00:50:02.413416 | orchestrator |
2026-01-07 00:50:02.413423 | orchestrator |
2026-01-07 00:50:02.413430 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:50:02.413437 | orchestrator | Wednesday 07 January 2026 00:50:00 +0000 (0:00:09.714) 0:02:22.695 *****
2026-01-07 00:50:02.413444 | orchestrator | ===============================================================================
2026-01-07 00:50:02.413451 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 41.82s
2026-01-07 00:50:02.413459 | orchestrator | common : Restart fluentd container ------------------------------------- 33.34s
2026-01-07 00:50:02.413465 | orchestrator | common : Restart cron container ----------------------------------------- 9.71s
2026-01-07 00:50:02.413473 | orchestrator | common : Copying over config.json files for services -------------------- 6.36s
2026-01-07 00:50:02.413480 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.62s
2026-01-07 00:50:02.413487 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.35s
2026-01-07 00:50:02.413493 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.96s
2026-01-07 00:50:02.413500 | orchestrator | common : Check common containers ---------------------------------------- 3.68s
2026-01-07 00:50:02.413506 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.41s
2026-01-07 00:50:02.413513 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.05s
2026-01-07 00:50:02.413520 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.05s
2026-01-07 00:50:02.413526 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.88s
2026-01-07 00:50:02.413533 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.64s
2026-01-07 00:50:02.413539 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.55s
2026-01-07 00:50:02.413550 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s
2026-01-07 00:50:02.413557 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.84s
2026-01-07 00:50:02.413564 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.77s
2026-01-07 00:50:02.413570 | orchestrator | common : Creating log volume -------------------------------------------- 1.68s
2026-01-07 00:50:02.413577 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.45s
2026-01-07 00:50:02.413583 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.32s
2026-01-07 00:50:02.413593 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:02.413600 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:02.413606 | orchestrator | 2026-01-07 00:50:02 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:02.413613 | orchestrator | 2026-01-07 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:05.434562 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:05.437438 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:05.437928 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:05.438559 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:05.439073 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:05.441046 | orchestrator | 2026-01-07 00:50:05 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:05.441079 | orchestrator | 2026-01-07 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:08.461875 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:08.461941 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:08.461951 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:08.462112 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:08.462668 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:08.463101 | orchestrator | 2026-01-07 00:50:08 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:08.463123 | orchestrator | 2026-01-07 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:11.490322 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:11.490378 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:11.490384 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:11.490388 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:11.490392 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:11.490396 | orchestrator | 2026-01-07 00:50:11 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:11.490400 | orchestrator | 2026-01-07 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:14.550488 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:14.550579 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:14.550590 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:14.550599 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:14.550608 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:14.550616 | orchestrator | 2026-01-07 00:50:14 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:14.550626 | orchestrator | 2026-01-07 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:17.550097 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:17.550291 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:17.551070 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:17.551411 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:17.552121 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:17.552559 | orchestrator | 2026-01-07 00:50:17 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:17.552586 | orchestrator | 2026-01-07 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:20.594370 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state STARTED
2026-01-07 00:50:20.607711 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:20.627455 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:20.627546 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:20.627557 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:20.627564 | orchestrator | 2026-01-07 00:50:20 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:20.627572 | orchestrator | 2026-01-07 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:23.656471 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task dcb92c6e-7b07-4183-941e-a9827e25db26 is in state SUCCESS
2026-01-07 00:50:23.657800 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:23.661876 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:23.663667 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:23.664674 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:23.666712 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:50:23.668670 | orchestrator | 2026-01-07 00:50:23 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:23.668706 | orchestrator | 2026-01-07 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:26.745050 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:26.745542 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:26.746292 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:26.746993 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:26.747915 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:50:26.748490 | orchestrator | 2026-01-07 00:50:26 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:26.748719 | orchestrator | 2026-01-07 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:30.135212 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:30.135270 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:30.135652 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:30.138987 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:30.139647 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:50:30.141348 | orchestrator | 2026-01-07 00:50:30 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state STARTED
2026-01-07 00:50:30.141383 | orchestrator | 2026-01-07 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:50:33.192195 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED
2026-01-07 00:50:33.192381 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:50:33.193020 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:50:33.193548 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:50:33.194207 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:50:33.194892 | orchestrator | 2026-01-07 00:50:33 | INFO  | Task 09860c2f-0d31-4a76-b033-1b38a1adaa68 is in state SUCCESS
2026-01-07 00:50:33.195774 | orchestrator |
2026-01-07 00:50:33.195798 | orchestrator |
2026-01-07 00:50:33.195803 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:50:33.195808 | orchestrator |
2026-01-07 00:50:33.195811 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:50:33.195919 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.327) 0:00:00.327 *****
2026-01-07 00:50:33.195982 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:33.195988 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:33.195991 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:33.195995 | orchestrator |
2026-01-07 00:50:33.195999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:50:33.196002 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.422) 0:00:00.749 *****
2026-01-07 00:50:33.196005 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-07 00:50:33.196009 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-07 00:50:33.196012 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-07 00:50:33.196015 | orchestrator |
2026-01-07 00:50:33.196019 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-07 00:50:33.196022 | orchestrator |
2026-01-07 00:50:33.196025 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-07 00:50:33.196028 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.478) 0:00:01.227 *****
2026-01-07 00:50:33.196032 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:50:33.196048 | orchestrator |
2026-01-07 00:50:33.196052 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-07 00:50:33.196055 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.502) 0:00:01.730 *****
2026-01-07 00:50:33.196058 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-07 00:50:33.196061 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-07 00:50:33.196064 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-07 00:50:33.196085 | orchestrator |
2026-01-07 00:50:33.196088 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-07 00:50:33.196092 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:00.675) 0:00:02.406 *****
2026-01-07 00:50:33.196095 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-07 00:50:33.196098 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-07 00:50:33.196101 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-07 00:50:33.196104 | orchestrator |
2026-01-07 00:50:33.196107 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-07 00:50:33.196110 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:02.066) 0:00:04.473 *****
2026-01-07 00:50:33.196113 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:33.196118 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:33.196123 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:33.196128 | orchestrator |
2026-01-07 00:50:33.196133 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-07 00:50:33.196138 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:01.980) 0:00:06.453 *****
2026-01-07 00:50:33.196143 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:50:33.196147 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:50:33.196152 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:50:33.196157 | orchestrator |
2026-01-07 00:50:33.196161 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:50:33.196167 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:33.196174 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:33.196181 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:50:33.196184 | orchestrator |
2026-01-07 00:50:33.196188 | orchestrator |
2026-01-07 00:50:33.196192 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:50:33.196196 | orchestrator | Wednesday 07 January 2026 00:50:21 +0000 (0:00:07.394) 0:00:13.848 *****
2026-01-07 00:50:33.196199 | orchestrator | ===============================================================================
2026-01-07 00:50:33.196203 | orchestrator | memcached : Restart memcached container --------------------------------- 7.39s
2026-01-07 00:50:33.196206 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.07s
2026-01-07 00:50:33.196210 | orchestrator | memcached : Check memcached container ----------------------------------- 1.98s
2026-01-07 00:50:33.196214 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s
2026-01-07 00:50:33.196218 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2026-01-07 00:50:33.196223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-01-07 00:50:33.196228 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2026-01-07 00:50:33.196234 | orchestrator |
2026-01-07 00:50:33.196240 | orchestrator |
2026-01-07 00:50:33.196245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:50:33.196251 | orchestrator |
2026-01-07 00:50:33.196262 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:50:33.196266 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:00.223) 0:00:00.223 *****
2026-01-07 00:50:33.196269 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:50:33.196273 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:50:33.196276 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:50:33.196280 | orchestrator |
2026-01-07 00:50:33.196284 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:50:33.196302 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:00.288) 0:00:00.512 *****
2026-01-07 00:50:33.196310 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-07 00:50:33.196313 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-07 00:50:33.196445 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-07 00:50:33.196452 | orchestrator |
2026-01-07 00:50:33.196456 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-07 00:50:33.196459 | orchestrator |
2026-01-07 00:50:33.196463 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-07 00:50:33.196467 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.550) 0:00:01.062 *****
2026-01-07 00:50:33.196470 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:50:33.196474 | orchestrator |
2026-01-07 00:50:33.196478 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-07 00:50:33.196481 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.737) 0:00:01.800 *****
2026-01-07 00:50:33.196487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196525 | orchestrator |
2026-01-07 00:50:33.196529 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-07 00:50:33.196532 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:01.448) 0:00:03.248 *****
2026-01-07 00:50:33.196536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196570 | orchestrator |
2026-01-07 00:50:33.196573 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-07 00:50:33.196577 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:02.780) 0:00:06.029 *****
2026-01-07 00:50:33.196580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-07 00:50:33.196607 | orchestrator |
2026-01-07 00:50:33.196610 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-07 00:50:33.196613 | orchestrator | Wednesday 07 January 2026 00:50:16 +0000 (0:00:02.765) 0:00:08.795 *****
2026-01-07 00:50:33.196617 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:50:33.196620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:50:33.196623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-07 00:50:33.196627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:50:33.196630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 00:50:33.196638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-07 
00:50:33.196641 | orchestrator | 2026-01-07 00:50:33.196644 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:50:33.196647 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:01.910) 0:00:10.705 ***** 2026-01-07 00:50:33.196651 | orchestrator | 2026-01-07 00:50:33.196654 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:50:33.196657 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.086) 0:00:10.792 ***** 2026-01-07 00:50:33.196660 | orchestrator | 2026-01-07 00:50:33.196663 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-07 00:50:33.196666 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.064) 0:00:10.856 ***** 2026-01-07 00:50:33.196669 | orchestrator | 2026-01-07 00:50:33.196672 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-07 00:50:33.196675 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.065) 0:00:10.922 ***** 2026-01-07 00:50:33.196678 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:33.196682 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:33.196685 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:33.196688 | orchestrator | 2026-01-07 00:50:33.196691 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-07 00:50:33.196694 | orchestrator | Wednesday 07 January 2026 00:50:28 +0000 (0:00:10.152) 0:00:21.075 ***** 2026-01-07 00:50:33.196697 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:50:33.196700 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:50:33.196703 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:50:33.196706 | orchestrator | 2026-01-07 00:50:33.196710 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-07 00:50:33.196713 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:33.196716 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:33.196719 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:50:33.196722 | orchestrator | 2026-01-07 00:50:33.196725 | orchestrator | 2026-01-07 00:50:33.196731 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:50:33.196735 | orchestrator | Wednesday 07 January 2026 00:50:31 +0000 (0:00:03.475) 0:00:24.550 ***** 2026-01-07 00:50:33.196738 | orchestrator | =============================================================================== 2026-01-07 00:50:33.196743 | orchestrator | redis : Restart redis container ---------------------------------------- 10.15s 2026-01-07 00:50:33.196759 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.48s 2026-01-07 00:50:33.196762 | orchestrator | redis : Copying over default config.json files -------------------------- 2.78s 2026-01-07 00:50:33.196765 | orchestrator | redis : Copying over redis config files --------------------------------- 2.77s 2026-01-07 00:50:33.196768 | orchestrator | redis : Check redis containers ------------------------------------------ 1.91s 2026-01-07 00:50:33.196771 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.45s 2026-01-07 00:50:33.196774 | orchestrator | redis : include_tasks --------------------------------------------------- 0.74s 2026-01-07 00:50:33.196777 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2026-01-07 00:50:33.196780 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.29s 2026-01-07 00:50:33.196783 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-01-07 00:50:33.196786 | orchestrator | 2026-01-07 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:36.220651 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:36.221424 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:36.222005 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:36.222690 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:36.224136 | orchestrator | 2026-01-07 00:50:36 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:36.224167 | orchestrator | 2026-01-07 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:39.261759 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:39.262048 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:39.262577 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:39.263293 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:39.264541 | orchestrator | 2026-01-07 00:50:39 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:39.264575 | orchestrator | 2026-01-07 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:42.284713 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state 
STARTED 2026-01-07 00:50:42.285876 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:42.287235 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:42.288965 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:42.291159 | orchestrator | 2026-01-07 00:50:42 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:42.291199 | orchestrator | 2026-01-07 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:45.427783 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:45.432195 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:45.435197 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:45.436188 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:45.439690 | orchestrator | 2026-01-07 00:50:45 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:45.439806 | orchestrator | 2026-01-07 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:48.469860 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:48.470383 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:48.471236 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:48.472782 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 
2026-01-07 00:50:48.473397 | orchestrator | 2026-01-07 00:50:48 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:48.473597 | orchestrator | 2026-01-07 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:51.818807 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:51.819316 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:51.820073 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:51.820965 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:51.821824 | orchestrator | 2026-01-07 00:50:51 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:51.821861 | orchestrator | 2026-01-07 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:54.862209 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:54.863430 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:54.864084 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:54.864942 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:54.865797 | orchestrator | 2026-01-07 00:50:54 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:54.865829 | orchestrator | 2026-01-07 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:50:57.916429 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:50:57.918724 | 
orchestrator | 2026-01-07 00:50:57 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:50:57.919252 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:50:57.920762 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:50:57.921389 | orchestrator | 2026-01-07 00:50:57 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:50:57.921468 | orchestrator | 2026-01-07 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:01.063024 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:51:01.063513 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:51:01.064169 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:51:01.065854 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:51:01.067785 | orchestrator | 2026-01-07 00:51:01 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:51:01.067819 | orchestrator | 2026-01-07 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:04.112329 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:51:04.112746 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:51:04.113457 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:51:04.114005 | orchestrator | 2026-01-07 00:51:04 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:51:04.114854 | 
orchestrator | 2026-01-07 00:51:04 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:51:04.114885 | orchestrator | 2026-01-07 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:07.140431 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state STARTED 2026-01-07 00:51:07.142936 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED 2026-01-07 00:51:07.145383 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:51:07.147526 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:51:07.149654 | orchestrator | 2026-01-07 00:51:07 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:51:07.150481 | orchestrator | 2026-01-07 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:51:10.175946 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 987922cc-1a69-4090-a479-1fb71c7cbc16 is in state SUCCESS 2026-01-07 00:51:10.177276 | orchestrator | 2026-01-07 00:51:10.177359 | orchestrator | 2026-01-07 00:51:10.177368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:51:10.177374 | orchestrator | 2026-01-07 00:51:10.177378 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:51:10.177384 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:00.202) 0:00:00.202 ***** 2026-01-07 00:51:10.177389 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:10.177394 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:10.177399 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:10.177404 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:51:10.177409 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:51:10.177413 | 
orchestrator | ok: [testbed-node-5] 2026-01-07 00:51:10.177418 | orchestrator | 2026-01-07 00:51:10.177422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:51:10.177427 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:00.667) 0:00:00.869 ***** 2026-01-07 00:51:10.177432 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177438 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177443 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177460 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177465 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177470 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-07 00:51:10.177475 | orchestrator | 2026-01-07 00:51:10.177479 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-07 00:51:10.177484 | orchestrator | 2026-01-07 00:51:10.177494 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-07 00:51:10.177499 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.727) 0:00:01.597 ***** 2026-01-07 00:51:10.177505 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:51:10.177512 | orchestrator | 2026-01-07 00:51:10.177517 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 00:51:10.177523 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:01.445) 0:00:03.043 ***** 2026-01-07 
00:51:10.177528 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:51:10.177533 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:51:10.177538 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:51:10.177544 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:51:10.177549 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:51:10.177553 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:51:10.177559 | orchestrator | 2026-01-07 00:51:10.177564 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 00:51:10.177569 | orchestrator | Wednesday 07 January 2026 00:50:11 +0000 (0:00:01.339) 0:00:04.383 ***** 2026-01-07 00:51:10.177574 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-07 00:51:10.177580 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-07 00:51:10.177585 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-07 00:51:10.177590 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-07 00:51:10.177596 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-07 00:51:10.177601 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-07 00:51:10.177607 | orchestrator | 2026-01-07 00:51:10.177612 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 00:51:10.177617 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:01.893) 0:00:06.276 ***** 2026-01-07 00:51:10.177686 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-07 00:51:10.177693 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:10.177699 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-07 00:51:10.177705 | orchestrator | skipping: [testbed-node-1] 
2026-01-07 00:51:10.177710 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-07 00:51:10.177716 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:10.177721 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-07 00:51:10.177727 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:10.177732 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-07 00:51:10.177738 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:10.177743 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-07 00:51:10.177748 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:10.177753 | orchestrator | 2026-01-07 00:51:10.177759 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-07 00:51:10.177764 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:01.335) 0:00:07.611 ***** 2026-01-07 00:51:10.177769 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:10.177782 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:10.177788 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:10.177793 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:10.177799 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:10.177805 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:10.177810 | orchestrator | 2026-01-07 00:51:10.177815 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-07 00:51:10.177821 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:01.060) 0:00:08.672 ***** 2026-01-07 00:51:10.177863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177887 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 
00:51:10.177952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177958 | orchestrator | 2026-01-07 00:51:10.177964 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-07 00:51:10.177970 | orchestrator | Wednesday 07 January 2026 00:50:16 +0000 (0:00:01.489) 0:00:10.162 ***** 2026-01-07 00:51:10.177976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.177984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178180 | orchestrator | 2026-01-07 00:51:10.178186 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-07 00:51:10.178192 | orchestrator | Wednesday 07 January 2026 00:50:19 +0000 (0:00:03.112) 0:00:13.275 ***** 2026-01-07 00:51:10.178198 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:51:10.178205 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:51:10.178211 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:51:10.178217 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:10.178223 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:10.178229 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:10.178235 | orchestrator | 2026-01-07 00:51:10.178241 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-07 00:51:10.178250 | orchestrator | Wednesday 07 January 2026 00:50:21 +0000 (0:00:01.179) 0:00:14.454 ***** 2026-01-07 00:51:10.178256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178331 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-07 00:51:10.178352 | orchestrator | 2026-01-07 00:51:10.178359 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178368 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:02.050) 0:00:16.504 ***** 2026-01-07 00:51:10.178375 | orchestrator | 2026-01-07 00:51:10.178381 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178386 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:00.329) 0:00:16.834 ***** 2026-01-07 00:51:10.178392 | orchestrator | 2026-01-07 00:51:10.178399 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178404 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:00.153) 0:00:16.987 ***** 2026-01-07 00:51:10.178410 | orchestrator | 2026-01-07 00:51:10.178415 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178420 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:00.251) 0:00:17.238 ***** 2026-01-07 00:51:10.178426 | orchestrator | 2026-01-07 00:51:10.178431 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178436 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.288) 0:00:17.526 ***** 2026-01-07 00:51:10.178442 | orchestrator | 2026-01-07 00:51:10.178447 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-07 00:51:10.178453 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.137) 0:00:17.664 ***** 2026-01-07 00:51:10.178459 | orchestrator | 2026-01-07 00:51:10.178464 | orchestrator | RUNNING 
HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-07 00:51:10.178470 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.185) 0:00:17.850 ***** 2026-01-07 00:51:10.178475 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:10.178481 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:10.178486 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:51:10.178492 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:10.178498 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:51:10.178503 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:51:10.178508 | orchestrator | 2026-01-07 00:51:10.178514 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-07 00:51:10.178520 | orchestrator | Wednesday 07 January 2026 00:50:34 +0000 (0:00:10.382) 0:00:28.232 ***** 2026-01-07 00:51:10.178526 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:51:10.178531 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:51:10.178538 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:51:10.178543 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:51:10.178549 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:51:10.178555 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:51:10.178560 | orchestrator | 2026-01-07 00:51:10.178566 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-07 00:51:10.178571 | orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:01.008) 0:00:29.241 ***** 2026-01-07 00:51:10.178577 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:10.178582 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:10.178588 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:10.178593 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:51:10.178599 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:51:10.178604 | orchestrator | changed: [testbed-node-3] 
2026-01-07 00:51:10.178610 | orchestrator | 2026-01-07 00:51:10.178615 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-07 00:51:10.178621 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:09.473) 0:00:38.715 ***** 2026-01-07 00:51:10.178645 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-07 00:51:10.178650 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-07 00:51:10.178655 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-07 00:51:10.178661 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-07 00:51:10.178670 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-07 00:51:10.178675 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-07 00:51:10.178680 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-07 00:51:10.178686 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-07 00:51:10.178692 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-07 00:51:10.178700 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-07 00:51:10.178705 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-07 00:51:10.178711 | orchestrator | 
changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-07 00:51:10.178717 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178722 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178732 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178738 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178744 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178750 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-07 00:51:10.178755 | orchestrator | 2026-01-07 00:51:10.178761 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-07 00:51:10.178767 | orchestrator | Wednesday 07 January 2026 00:50:52 +0000 (0:00:06.839) 0:00:45.554 ***** 2026-01-07 00:51:10.178772 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-07 00:51:10.178778 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:10.178784 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-07 00:51:10.178790 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:10.178796 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-07 00:51:10.178801 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:10.178807 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-07 00:51:10.178813 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-07 00:51:10.178818 
| orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-07 00:51:10.178824 | orchestrator | 2026-01-07 00:51:10.178829 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-07 00:51:10.178834 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:02.274) 0:00:47.828 ***** 2026-01-07 00:51:10.178840 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:51:10.178845 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:51:10.178851 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:51:10.178856 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:51:10.178862 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-07 00:51:10.178867 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:51:10.178873 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:51:10.178879 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:51:10.178885 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-07 00:51:10.178894 | orchestrator | 2026-01-07 00:51:10.178900 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-07 00:51:10.178906 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:03.919) 0:00:51.748 ***** 2026-01-07 00:51:10.178911 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:51:10.178917 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:51:10.178922 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:51:10.178928 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:51:10.178933 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:51:10.178939 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:51:10.178944 | orchestrator | 2026-01-07 00:51:10.178950 | orchestrator | PLAY RECAP 
*********************************************************************
2026-01-07 00:51:10.178956 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-07 00:51:10.178967 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-07 00:51:10.178973 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-07 00:51:10.178978 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 00:51:10.178984 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 00:51:10.178989 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 00:51:10.178995 | orchestrator |
2026-01-07 00:51:10.179000 | orchestrator |
2026-01-07 00:51:10.179005 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:51:10.179011 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:08.818) 0:01:00.566 *****
2026-01-07 00:51:10.179017 | orchestrator | ===============================================================================
2026-01-07 00:51:10.179022 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.29s
2026-01-07 00:51:10.179027 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.38s
2026-01-07 00:51:10.179033 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.84s
2026-01-07 00:51:10.179042 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.92s
2026-01-07 00:51:10.179048 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.11s
2026-01-07 00:51:10.179053 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.27s
2026-01-07 00:51:10.179059 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.05s
2026-01-07 00:51:10.179065 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.89s
2026-01-07 00:51:10.179070 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.49s
2026-01-07 00:51:10.179076 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.45s
2026-01-07 00:51:10.179081 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.35s
2026-01-07 00:51:10.179088 | orchestrator | module-load : Load modules ---------------------------------------------- 1.34s
2026-01-07 00:51:10.179094 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.34s
2026-01-07 00:51:10.179099 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.18s
2026-01-07 00:51:10.179105 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.06s
2026-01-07 00:51:10.179111 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.01s
2026-01-07 00:51:10.179120 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2026-01-07 00:51:10.179126 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
2026-01-07 00:51:10.179131 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:10.179137 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:10.179143 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:10.179245 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:10.179407 | orchestrator | 2026-01-07 00:51:10 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:10.179419 | orchestrator | 2026-01-07 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:13.226063 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:13.226956 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:13.228116 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:13.229478 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:13.233317 | orchestrator | 2026-01-07 00:51:13 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:13.233341 | orchestrator | 2026-01-07 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:16.263842 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:16.263971 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:16.264685 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:16.265456 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:16.266161 | orchestrator | 2026-01-07 00:51:16 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:16.266189 | orchestrator | 2026-01-07 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:19.286696 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:19.287231 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:19.287945 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:19.288860 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:19.289450 | orchestrator | 2026-01-07 00:51:19 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:19.289561 | orchestrator | 2026-01-07 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:22.319016 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:22.319637 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:22.320345 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:22.321513 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:22.322281 | orchestrator | 2026-01-07 00:51:22 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:22.322336 | orchestrator | 2026-01-07 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:25.350774 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:25.351159 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:25.352095 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:25.353371 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:25.353861 | orchestrator | 2026-01-07 00:51:25 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:25.353883 | orchestrator | 2026-01-07 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:28.390347 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:28.393254 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:28.395864 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:28.398156 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:28.400314 | orchestrator | 2026-01-07 00:51:28 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:28.400386 | orchestrator | 2026-01-07 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:31.452654 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:31.453902 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:31.457316 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:31.458864 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:31.460686 | orchestrator | 2026-01-07 00:51:31 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:31.460759 | orchestrator | 2026-01-07 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:34.511175 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:34.511241 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:34.511714 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:34.514596 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:34.516022 | orchestrator | 2026-01-07 00:51:34 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:34.516070 | orchestrator | 2026-01-07 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:37.560456 | orchestrator | 2026-01-07 00:51:37 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:37.561237 | orchestrator | 2026-01-07 00:51:37 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:37.564119 | orchestrator | 2026-01-07 00:51:37 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:37.564865 | orchestrator | 2026-01-07 00:51:37 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:37.566985 | orchestrator | 2026-01-07 00:51:37 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:37.567034 | orchestrator | 2026-01-07 00:51:37 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:40.675006 | orchestrator | 2026-01-07 00:51:40 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:40.676824 | orchestrator | 2026-01-07 00:51:40 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:40.680007 | orchestrator | 2026-01-07 00:51:40 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:40.681170 | orchestrator | 2026-01-07 00:51:40 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:40.681988 | orchestrator | 2026-01-07 00:51:40 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:40.682145 | orchestrator | 2026-01-07 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:43.738115 | orchestrator | 2026-01-07 00:51:43 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:43.739897 | orchestrator | 2026-01-07 00:51:43 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:43.741414 | orchestrator | 2026-01-07 00:51:43 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:43.743740 | orchestrator | 2026-01-07 00:51:43 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:43.744857 | orchestrator | 2026-01-07 00:51:43 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:43.745058 | orchestrator | 2026-01-07 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:46.777458 | orchestrator | 2026-01-07 00:51:46 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:46.778201 | orchestrator | 2026-01-07 00:51:46 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:46.779002 | orchestrator | 2026-01-07 00:51:46 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:46.779887 | orchestrator | 2026-01-07 00:51:46 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:46.780684 | orchestrator | 2026-01-07 00:51:46 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:46.780713 | orchestrator | 2026-01-07 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:49.817350 | orchestrator | 2026-01-07 00:51:49 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:49.817918 | orchestrator | 2026-01-07 00:51:49 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:49.818974 | orchestrator | 2026-01-07 00:51:49 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:49.820484 | orchestrator | 2026-01-07 00:51:49 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:49.822113 | orchestrator | 2026-01-07 00:51:49 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:49.822554 | orchestrator | 2026-01-07 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:52.866983 | orchestrator | 2026-01-07 00:51:52 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:52.868157 | orchestrator | 2026-01-07 00:51:52 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:52.871873 | orchestrator | 2026-01-07 00:51:52 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:52.872037 | orchestrator | 2026-01-07 00:51:52 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:52.874974 | orchestrator | 2026-01-07 00:51:52 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:52.875138 | orchestrator | 2026-01-07 00:51:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:56.035865 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:56.037595 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:56.041633 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:56.046794 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:56.048617 | orchestrator | 2026-01-07 00:51:56 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:56.048659 | orchestrator | 2026-01-07 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:51:59.421923 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:51:59.422232 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:51:59.423280 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:51:59.424363 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:51:59.425090 | orchestrator | 2026-01-07 00:51:59 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:51:59.425112 | orchestrator | 2026-01-07 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:02.463725 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state STARTED
2026-01-07 00:52:02.463984 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:52:02.464866 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:52:02.466667 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED
2026-01-07 00:52:02.466724 | orchestrator | 2026-01-07 00:52:02 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED
2026-01-07 00:52:02.466741 | orchestrator | 2026-01-07 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:52:05.509959 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 96de4f9c-ebba-4ea5-b016-1e5788f4f773 is in state SUCCESS
2026-01-07 00:52:05.510817 | orchestrator |
2026-01-07 00:52:05.510849 | orchestrator |
2026-01-07 00:52:05.510856 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-07 00:52:05.510862 | orchestrator |
2026-01-07 00:52:05.510867 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-07 00:52:05.510890 | orchestrator | Wednesday 07 January 2026 00:47:38 +0000 (0:00:00.188) 0:00:00.188 *****
2026-01-07 00:52:05.510896 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:52:05.510903 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:52:05.510909 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:52:05.510914 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.510920 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.510926 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.510931 | orchestrator |
2026-01-07 00:52:05.510937 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-07 00:52:05.510992 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:00.677) 0:00:00.866 *****
2026-01-07 00:52:05.511000 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511006 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511012 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511113 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511120 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511125 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511130 | orchestrator |
2026-01-07 00:52:05.511136 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-07 00:52:05.511141 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:00.553) 0:00:01.419 *****
2026-01-07 00:52:05.511147 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511153 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511158 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511163 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511169 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511174 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511179 | orchestrator |
2026-01-07 00:52:05.511185 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-07 00:52:05.511191 | orchestrator | Wednesday 07 January 2026 00:47:40 +0000 (0:00:00.761) 0:00:02.181 *****
2026-01-07 00:52:05.511196 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.511201 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.511206 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.511212 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.511217 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.511222 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.511227 | orchestrator |
2026-01-07 00:52:05.511232 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-07 00:52:05.511237 | orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:03.169) 0:00:05.351 *****
2026-01-07 00:52:05.511243 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.511248 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.511253 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.511258 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.511263 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.511268 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.511272 | orchestrator |
2026-01-07 00:52:05.511277 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-07 00:52:05.511283 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:02.093) 0:00:07.445 *****
2026-01-07 00:52:05.511288 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.511293 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.511328 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.511334 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.511340 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.511345 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.511351 | orchestrator |
2026-01-07 00:52:05.511356 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-07 00:52:05.511362 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:01.979) 0:00:09.425 *****
2026-01-07 00:52:05.511367 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511382 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511396 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511401 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511407 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511412 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511417 | orchestrator |
2026-01-07 00:52:05.511423 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-07 00:52:05.511428 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:00.949) 0:00:10.374 *****
2026-01-07 00:52:05.511434 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511439 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511444 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511450 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511455 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511460 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511466 | orchestrator |
2026-01-07 00:52:05.511471 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-07 00:52:05.511521 | orchestrator | Wednesday 07 January 2026 00:47:49 +0000 (0:00:00.625) 0:00:10.999 *****
2026-01-07 00:52:05.511527 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511533 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511538 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511544 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511549 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511554 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511560 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511565 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511571 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511576 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511592 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511597 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511603 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511608 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511613 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511619 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-07 00:52:05.511624 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-07 00:52:05.511629 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511634 | orchestrator |
2026-01-07 00:52:05.511640 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-07 00:52:05.511645 | orchestrator | Wednesday 07 January 2026 00:47:50 +0000 (0:00:00.854) 0:00:11.854 *****
2026-01-07 00:52:05.511650 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511656 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511661 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511666 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511672 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511677 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511682 | orchestrator |
2026-01-07 00:52:05.511688 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-07 00:52:05.511693 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:01.337) 0:00:13.191 *****
2026-01-07 00:52:05.511699 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:52:05.511705 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:52:05.511710 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:52:05.511716 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.511725 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.511731 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.511736 | orchestrator |
2026-01-07 00:52:05.511741 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-07 00:52:05.511747 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:01.459) 0:00:14.651 *****
2026-01-07 00:52:05.511752 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.511758 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.511763 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.511768 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.511774 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.511779 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.511784 | orchestrator |
2026-01-07 00:52:05.511790 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-07 00:52:05.511795 | orchestrator | Wednesday 07 January 2026 00:47:59 +0000 (0:00:06.411) 0:00:21.062 *****
2026-01-07 00:52:05.511800 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511806 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511811 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511816 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511822 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511827 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511832 | orchestrator |
2026-01-07 00:52:05.511838 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-07 00:52:05.511843 | orchestrator | Wednesday 07 January 2026 00:48:00 +0000 (0:00:01.110) 0:00:22.174 *****
2026-01-07 00:52:05.511849 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511854 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511859 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511865 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511870 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511875 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511880 | orchestrator |
2026-01-07 00:52:05.511885 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-07 00:52:05.511894 | orchestrator | Wednesday 07 January 2026 00:48:02 +0000 (0:00:02.308) 0:00:24.483 *****
2026-01-07 00:52:05.511942 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.511948 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.511953 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.511958 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.511964 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.511969 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.511974 | orchestrator |
2026-01-07 00:52:05.511980 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-07 00:52:05.511985 | orchestrator | Wednesday 07 January 2026 00:48:03 +0000 (0:00:00.646) 0:00:25.129 *****
2026-01-07 00:52:05.511991 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-07 00:52:05.511997 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-07 00:52:05.512002 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.512007 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-07 00:52:05.512013 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-07 00:52:05.512018 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.512023 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-07 00:52:05.512029 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-07 00:52:05.512034 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.512039 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-07 00:52:05.512045 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-07 00:52:05.512050 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512055 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-07 00:52:05.512065 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-07 00:52:05.512070 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512075 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-07 00:52:05.512081 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-07 00:52:05.512086 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512091 | orchestrator |
2026-01-07 00:52:05.512097 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-07 00:52:05.512107 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:00.819) 0:00:25.949 *****
2026-01-07 00:52:05.512113 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.512118 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.512122 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.512128 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512132 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512137 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512143 | orchestrator |
2026-01-07 00:52:05.512148 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-07 00:52:05.512153 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:00.496) 0:00:26.446 *****
2026-01-07 00:52:05.512159 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.512164 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.512170 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.512174 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512179 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512185 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512190 | orchestrator |
2026-01-07 00:52:05.512195 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-07 00:52:05.512200 | orchestrator |
2026-01-07 00:52:05.512206 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-07 00:52:05.512211 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:01.072) 0:00:27.519 *****
2026-01-07 00:52:05.512216 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512221 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512226 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512231 | orchestrator |
2026-01-07 00:52:05.512237 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-07 00:52:05.512242 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:01.634) 0:00:29.153 *****
2026-01-07 00:52:05.512247 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512252 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512257 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512263 | orchestrator |
2026-01-07 00:52:05.512268 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-07 00:52:05.512274 | orchestrator | Wednesday 07 January 2026 00:48:09 +0000 (0:00:01.600) 0:00:30.753 *****
2026-01-07 00:52:05.512279 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512284 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512290 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512295 | orchestrator |
2026-01-07 00:52:05.512300 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-07 00:52:05.512306 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:01.107) 0:00:31.861 *****
2026-01-07 00:52:05.512311 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512316 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512322 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512327 | orchestrator |
2026-01-07 00:52:05.512332 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-07 00:52:05.512338 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:01.070) 0:00:32.931 *****
2026-01-07 00:52:05.512343 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512348 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512354 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512359 | orchestrator |
2026-01-07 00:52:05.512368 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-07 00:52:05.512373 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:00.595) 0:00:33.527 *****
2026-01-07 00:52:05.512379 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512384 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.512389 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.512395 | orchestrator |
2026-01-07 00:52:05.512400 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-07 00:52:05.512406 | orchestrator | Wednesday 07 January 2026 00:48:13 +0000 (0:00:01.701) 0:00:35.229 *****
2026-01-07 00:52:05.512411 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.512416 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.512433 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512439 | orchestrator |
2026-01-07 00:52:05.512444 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-07 00:52:05.512449 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:01.504) 0:00:36.733 *****
2026-01-07 00:52:05.512455 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:52:05.512460 | orchestrator |
2026-01-07 00:52:05.512465 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-07 00:52:05.512470 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:00.614) 0:00:37.347 *****
2026-01-07 00:52:05.512487 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512493 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512498 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512504 | orchestrator |
2026-01-07 00:52:05.512509 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-07 00:52:05.512515 | orchestrator | Wednesday 07 January 2026 00:48:18 +0000 (0:00:02.578) 0:00:39.926 *****
2026-01-07 00:52:05.512520 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512525 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512531 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512536 | orchestrator |
2026-01-07 00:52:05.512541 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-07 00:52:05.512547 | orchestrator | Wednesday 07 January 2026 00:48:19 +0000 (0:00:01.161) 0:00:41.087 *****
2026-01-07 00:52:05.512552 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512558 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512563 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512568 | orchestrator |
2026-01-07 00:52:05.512574 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-07 00:52:05.512579 | orchestrator | Wednesday 07 January 2026 00:48:20 +0000 (0:00:01.073) 0:00:42.160 *****
2026-01-07 00:52:05.512584 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512590 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512595 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512600 | orchestrator |
2026-01-07 00:52:05.512606 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-07 00:52:05.512615 | orchestrator | Wednesday 07 January 2026 00:48:22 +0000 (0:00:01.588) 0:00:43.749 *****
2026-01-07 00:52:05.512620 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512626 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512631 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512636 | orchestrator |
2026-01-07 00:52:05.512642 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-07 00:52:05.512647 | orchestrator | Wednesday 07 January 2026 00:48:22 +0000 (0:00:00.557) 0:00:44.307 *****
2026-01-07 00:52:05.512652 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512658 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512663 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512669 | orchestrator |
2026-01-07 00:52:05.512674 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-07 00:52:05.512684 | orchestrator | Wednesday 07 January 2026 00:48:23 +0000 (0:00:00.571) 0:00:44.878 *****
2026-01-07 00:52:05.512689 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512694 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.512699 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.512705 | orchestrator |
2026-01-07 00:52:05.512710 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-07 00:52:05.512715 | orchestrator | Wednesday 07 January 2026 00:48:24 +0000 (0:00:01.698) 0:00:46.576 *****
2026-01-07 00:52:05.512721 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512726 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512732 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512737 | orchestrator |
2026-01-07 00:52:05.512742 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-07 00:52:05.512748 | orchestrator | Wednesday 07 January 2026 00:48:27 +0000 (0:00:02.866) 0:00:49.442 *****
2026-01-07 00:52:05.512753 | orchestrator | ok:
[testbed-node-0] 2026-01-07 00:52:05.512758 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:05.512764 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:05.512769 | orchestrator | 2026-01-07 00:52:05.512775 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-07 00:52:05.512780 | orchestrator | Wednesday 07 January 2026 00:48:28 +0000 (0:00:00.518) 0:00:49.961 ***** 2026-01-07 00:52:05.512785 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:52:05.512791 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:52:05.512797 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-07 00:52:05.512802 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-07 00:52:05.512808 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-07 00:52:05.512813 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-07 00:52:05.512818 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-07 00:52:05.512824 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-01-07 00:52:05.512832 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-07 00:52:05.512837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-07 00:52:05.512843 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-07 00:52:05.512848 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-07 00:52:05.512853 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-07 00:52:05.512859 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-07 00:52:05.512864 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-07 00:52:05.512869 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.512878 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.512883 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.512889 | orchestrator |
2026-01-07 00:52:05.512894 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-07 00:52:05.512899 | orchestrator | Wednesday 07 January 2026 00:49:22 +0000 (0:00:54.285) 0:01:44.246 *****
2026-01-07 00:52:05.512905 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.512910 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.512915 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.512921 | orchestrator |
2026-01-07 00:52:05.512926 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-07 00:52:05.512934 | orchestrator | Wednesday 07 January 2026 00:49:23 +0000 (0:00:00.510) 0:01:44.757 *****
2026-01-07 00:52:05.512939 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512945 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.512950 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.512955 | orchestrator |
2026-01-07 00:52:05.512961 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-07 00:52:05.512966 | orchestrator | Wednesday 07 January 2026 00:49:24 +0000 (0:00:01.057) 0:01:45.815 *****
2026-01-07 00:52:05.512971 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.512977 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.512982 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.512988 | orchestrator |
2026-01-07 00:52:05.512993 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-07 00:52:05.512998 | orchestrator | Wednesday 07 January 2026 00:49:25 +0000 (0:00:01.634) 0:01:47.450 *****
2026-01-07 00:52:05.513004 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513009 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513014 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513020 | orchestrator |
2026-01-07 00:52:05.513025 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-07 00:52:05.513029 | orchestrator | Wednesday 07 January 2026 00:49:50 +0000 (0:00:25.069) 0:02:12.519 *****
2026-01-07 00:52:05.513035 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513040 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.513046 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513051 | orchestrator |
2026-01-07 00:52:05.513056 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-07 00:52:05.513062 | orchestrator | Wednesday 07 January 2026 00:49:51 +0000 (0:00:00.646) 0:02:13.166 *****
2026-01-07 00:52:05.513067 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513072 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.513078 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513083 | orchestrator |
2026-01-07 00:52:05.513088 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-07 00:52:05.513094 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:00.664) 0:02:13.830 *****
2026-01-07 00:52:05.513099 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513104 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513109 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513115 | orchestrator |
2026-01-07 00:52:05.513120 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-07 00:52:05.513125 | orchestrator | Wednesday 07 January 2026 00:49:52 +0000 (0:00:00.652) 0:02:14.483 *****
2026-01-07 00:52:05.513131 | orchestrator | ok: [testbed-node-1]
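The "Verify that all nodes actually joined" task above failed repeatedly before succeeding: Ansible's `until`/`retries`/`delay` loop polls the cluster until every node shows up. A minimal generic sketch of that retry pattern (names and the join-check stand-in are hypothetical, not the role's actual implementation):

```python
import time

def wait_until(check, retries=20, delay=10, sleep=time.sleep):
    """Call check() until it returns a truthy value or retries run out,
    mirroring Ansible's until/retries/delay loop."""
    for _ in range(retries):
        result = check()
        if result:
            return result
        sleep(delay)
    raise TimeoutError(f"condition not met after {retries} attempts")

# Hypothetical stand-in for counting Ready nodes on the first master:
# each call pretends one more node has joined.
joined = []
def all_three_joined():
    joined.append(f"testbed-node-{len(joined)}")
    return len(joined) >= 3

wait_until(all_three_joined, retries=20, delay=0, sleep=lambda s: None)
print(len(joined))  # → 3
```

In the real play the check would parse `kubectl get nodes` output; the log shows four failed rounds (20 down to 16 retries left) before all three servers registered.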
2026-01-07 00:52:05.513136 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513141 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513147 | orchestrator |
2026-01-07 00:52:05.513152 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-07 00:52:05.513157 | orchestrator | Wednesday 07 January 2026 00:49:53 +0000 (0:00:00.946) 0:02:15.429 *****
2026-01-07 00:52:05.513163 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513168 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.513173 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513182 | orchestrator |
2026-01-07 00:52:05.513187 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-07 00:52:05.513193 | orchestrator | Wednesday 07 January 2026 00:49:54 +0000 (0:00:00.289) 0:02:15.719 *****
2026-01-07 00:52:05.513198 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513203 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513209 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513214 | orchestrator |
2026-01-07 00:52:05.513219 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-07 00:52:05.513225 | orchestrator | Wednesday 07 January 2026 00:49:54 +0000 (0:00:00.635) 0:02:16.354 *****
2026-01-07 00:52:05.513230 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513235 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513241 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513246 | orchestrator |
2026-01-07 00:52:05.513253 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-07 00:52:05.513258 | orchestrator | Wednesday 07 January 2026 00:49:55 +0000 (0:00:00.584) 0:02:16.939 *****
2026-01-07 00:52:05.513264 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513269 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513274 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513280 | orchestrator |
2026-01-07 00:52:05.513285 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-07 00:52:05.513290 | orchestrator | Wednesday 07 January 2026 00:49:56 +0000 (0:00:01.061) 0:02:18.001 *****
2026-01-07 00:52:05.513295 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:52:05.513300 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:52:05.513306 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:52:05.513311 | orchestrator |
2026-01-07 00:52:05.513316 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-07 00:52:05.513321 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.712) 0:02:18.713 *****
2026-01-07 00:52:05.513326 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.513331 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.513336 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.513341 | orchestrator |
2026-01-07 00:52:05.513346 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-07 00:52:05.513351 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.266) 0:02:18.979 *****
2026-01-07 00:52:05.513357 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.513362 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.513367 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.513372 | orchestrator |
2026-01-07 00:52:05.513377 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-07 00:52:05.513382 | orchestrator | Wednesday 07 January 2026 00:49:57 +0000 (0:00:00.269) 0:02:19.249 *****
2026-01-07 00:52:05.513388 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513393 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.513398 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513404 | orchestrator |
2026-01-07 00:52:05.513408 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-07 00:52:05.513414 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:00.757) 0:02:20.006 *****
2026-01-07 00:52:05.513419 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.513428 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.513433 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.513438 | orchestrator |
2026-01-07 00:52:05.513443 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-07 00:52:05.513448 | orchestrator | Wednesday 07 January 2026 00:49:58 +0000 (0:00:00.554) 0:02:20.560 *****
2026-01-07 00:52:05.513454 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-07 00:52:05.513459 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-07 00:52:05.513468 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-07 00:52:05.513484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-07 00:52:05.513490 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-07 00:52:05.513495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-07 00:52:05.513501 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-07 00:52:05.513507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-07 00:52:05.513513 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-07 00:52:05.513519 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-07 00:52:05.513525 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-07 00:52:05.513530 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-07 00:52:05.513536 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-07 00:52:05.513542 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-07 00:52:05.513548 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-07 00:52:05.513553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-07 00:52:05.513558 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-07 00:52:05.513564 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-07 00:52:05.513570 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-07 00:52:05.513576 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-07 00:52:05.513582 | orchestrator |
2026-01-07 00:52:05.513587 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-07 00:52:05.513593 | orchestrator |
2026-01-07 00:52:05.513598 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-07 00:52:05.513604 | orchestrator | Wednesday 07 January 2026 00:50:01 +0000 (0:00:02.702) 0:02:23.263 *****
2026-01-07 00:52:05.513609 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:52:05.513615 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:52:05.513624 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:52:05.513629 | orchestrator |
2026-01-07 00:52:05.513634 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-07 00:52:05.513640 | orchestrator | Wednesday 07 January 2026 00:50:02 +0000 (0:00:00.617) 0:02:23.881 *****
2026-01-07 00:52:05.513645 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:52:05.513651 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:52:05.513657 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:52:05.513662 | orchestrator |
2026-01-07 00:52:05.513668 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-07 00:52:05.513673 | orchestrator | Wednesday 07 January 2026 00:50:02 +0000 (0:00:00.641) 0:02:24.523 *****
2026-01-07 00:52:05.513679 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:52:05.513684 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:52:05.513690 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:52:05.513696 | orchestrator |
2026-01-07 00:52:05.513702 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-07 00:52:05.513708 | orchestrator | Wednesday 07 January 2026 00:50:03 +0000 (0:00:00.452) 0:02:24.975 *****
2026-01-07 00:52:05.513714 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:52:05.513723 | orchestrator |
2026-01-07 00:52:05.513728 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-07 00:52:05.513733 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:00.740) 0:02:25.716 *****
2026-01-07 00:52:05.513739 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.513744 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.513750 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.513756 | orchestrator |
2026-01-07 00:52:05.513761 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-07 00:52:05.513767 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:00.300) 0:02:26.016 *****
2026-01-07 00:52:05.513773 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.513779 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.513785 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.513791 | orchestrator |
2026-01-07 00:52:05.513796 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-07 00:52:05.513806 | orchestrator | Wednesday 07 January 2026 00:50:04 +0000 (0:00:00.319) 0:02:26.336 *****
2026-01-07 00:52:05.513812 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:52:05.513818 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:52:05.513824 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:52:05.513829 | orchestrator |
2026-01-07 00:52:05.513835 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-07 00:52:05.513841 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.350) 0:02:26.686 *****
2026-01-07 00:52:05.513847 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.513852 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.513858 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.513864 | orchestrator |
2026-01-07 00:52:05.513870 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-07 00:52:05.513876 | orchestrator | Wednesday 07 January 2026 00:50:05 +0000 (0:00:00.727) 0:02:27.414 *****
2026-01-07 00:52:05.513881 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.513887 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.513893 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.513899 | orchestrator |
2026-01-07 00:52:05.513905 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-07 00:52:05.513910 | orchestrator | Wednesday 07 January 2026 00:50:06 +0000 (0:00:00.976) 0:02:28.390 *****
2026-01-07 00:52:05.513916 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.513922 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.513928 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.513933 | orchestrator |
2026-01-07 00:52:05.513939 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-07 00:52:05.513945 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:01.115) 0:02:29.505 *****
2026-01-07 00:52:05.513951 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:52:05.513957 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:52:05.513963 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:52:05.513968 | orchestrator |
2026-01-07 00:52:05.513974 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-07 00:52:05.513980 | orchestrator |
2026-01-07 00:52:05.513986 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-07 00:52:05.513991 | orchestrator | Wednesday 07 January 2026 00:50:17 +0000 (0:00:09.758) 0:02:39.264 *****
2026-01-07 00:52:05.513997 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514003 | orchestrator |
2026-01-07 00:52:05.514009 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-07 00:52:05.514042 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.752) 0:02:40.016 *****
2026-01-07 00:52:05.514048 | orchestrator | changed: [testbed-manager]
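The node-token tasks earlier in the server play (register the file's access mode, change it, read the token, then restore the mode) follow a save/widen/read/restore pattern around the root-only /var/lib/rancher/k3s/server/node-token file. A minimal local sketch of that pattern, using a temporary stand-in file and a hypothetical helper name:

```python
import os
import stat
import tempfile

def read_with_mode_change(path, temp_mode=0o644):
    """Record the file's mode, temporarily widen it, read the contents,
    and restore the original mode afterwards."""
    original_mode = stat.S_IMODE(os.stat(path).st_mode)  # register access mode
    os.chmod(path, temp_mode)                            # change file access
    try:
        with open(path) as fh:                           # read node-token
            token = fh.read().strip()
    finally:
        os.chmod(path, original_mode)                    # restore access mode
    return token

# Demo with a stand-in token file (the token value is made up).
with tempfile.NamedTemporaryFile("w", suffix="node-token", delete=False) as fh:
    fh.write("K10abc::server:secret\n")
    demo_path = fh.name
os.chmod(demo_path, 0o600)
print(read_with_mode_change(demo_path))  # → K10abc::server:secret
os.unlink(demo_path)
```

The `finally` block mirrors the role's separate "Restore node-token file access" task: the mode is put back even if the read fails.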
2026-01-07 00:52:05.514054 | orchestrator |
2026-01-07 00:52:05.514060 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-07 00:52:05.514070 | orchestrator | Wednesday 07 January 2026 00:50:18 +0000 (0:00:00.427) 0:02:40.444 *****
2026-01-07 00:52:05.514076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-07 00:52:05.514082 | orchestrator |
2026-01-07 00:52:05.514087 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-07 00:52:05.514093 | orchestrator | Wednesday 07 January 2026 00:50:19 +0000 (0:00:00.679) 0:02:41.124 *****
2026-01-07 00:52:05.514098 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514104 | orchestrator |
2026-01-07 00:52:05.514110 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-07 00:52:05.514116 | orchestrator | Wednesday 07 January 2026 00:50:20 +0000 (0:00:00.857) 0:02:41.982 *****
2026-01-07 00:52:05.514121 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514126 | orchestrator |
2026-01-07 00:52:05.514132 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-07 00:52:05.514138 | orchestrator | Wednesday 07 January 2026 00:50:20 +0000 (0:00:00.548) 0:02:42.530 *****
2026-01-07 00:52:05.514144 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-07 00:52:05.514149 | orchestrator |
2026-01-07 00:52:05.514155 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-07 00:52:05.514161 | orchestrator | Wednesday 07 January 2026 00:50:22 +0000 (0:00:01.672) 0:02:44.203 *****
2026-01-07 00:52:05.514167 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-07 00:52:05.514172 | orchestrator |
2026-01-07 00:52:05.514178 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-07 00:52:05.514184 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:00.434) 0:02:45.128 *****
2026-01-07 00:52:05.514190 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514196 | orchestrator |
2026-01-07 00:52:05.514202 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-07 00:52:05.514208 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:00.434) 0:02:45.563 *****
2026-01-07 00:52:05.514213 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514219 | orchestrator |
2026-01-07 00:52:05.514225 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-07 00:52:05.514231 | orchestrator |
2026-01-07 00:52:05.514236 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-07 00:52:05.514242 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.798) 0:02:46.361 *****
2026-01-07 00:52:05.514248 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514254 | orchestrator |
2026-01-07 00:52:05.514260 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-07 00:52:05.514266 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:00.152) 0:02:46.514 *****
2026-01-07 00:52:05.514271 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-07 00:52:05.514277 | orchestrator |
2026-01-07 00:52:05.514283 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-07 00:52:05.514289 | orchestrator | Wednesday 07 January 2026 00:50:25 +0000 (0:00:00.237) 0:02:46.751 *****
2026-01-07 00:52:05.514295 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514300 | orchestrator |
2026-01-07 00:52:05.514306 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-07 00:52:05.514312 | orchestrator | Wednesday 07 January 2026 00:50:25 +0000 (0:00:00.740) 0:02:47.492 *****
2026-01-07 00:52:05.514321 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514327 | orchestrator |
2026-01-07 00:52:05.514333 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-07 00:52:05.514339 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:01.443) 0:02:48.935 *****
2026-01-07 00:52:05.514345 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514350 | orchestrator |
2026-01-07 00:52:05.514356 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-07 00:52:05.514367 | orchestrator | Wednesday 07 January 2026 00:50:28 +0000 (0:00:00.862) 0:02:49.797 *****
2026-01-07 00:52:05.514373 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514378 | orchestrator |
2026-01-07 00:52:05.514384 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-07 00:52:05.514390 | orchestrator | Wednesday 07 January 2026 00:50:28 +0000 (0:00:00.470) 0:02:50.268 *****
2026-01-07 00:52:05.514396 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514402 | orchestrator |
2026-01-07 00:52:05.514408 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-07 00:52:05.514414 | orchestrator | Wednesday 07 January 2026 00:50:34 +0000 (0:00:06.220) 0:02:56.488 *****
2026-01-07 00:52:05.514419 | orchestrator | changed: [testbed-manager]
2026-01-07 00:52:05.514425 | orchestrator |
2026-01-07 00:52:05.514431 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-07 00:52:05.514437 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:12.260) 0:03:08.748 *****
2026-01-07 00:52:05.514443 | orchestrator | ok: [testbed-manager]
2026-01-07 00:52:05.514448 | orchestrator |
2026-01-07 00:52:05.514454 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-07 00:52:05.514460 | orchestrator |
2026-01-07 00:52:05.514466 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-07 00:52:05.514472 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.475) 0:03:09.224 *****
2026-01-07 00:52:05.514524 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:52:05.514529 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:52:05.514535 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:52:05.514540 | orchestrator |
2026-01-07 00:52:05.514545 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-07 00:52:05.514551 | orchestrator | Wednesday 07 January 2026 00:50:47 +0000 (0:00:00.300) 0:03:09.524 *****
2026-01-07 00:52:05.514556 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:52:05.514561 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:52:05.514567 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:52:05.514572 | orchestrator |
2026-01-07 00:52:05.514577 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-07 00:52:05.514583 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:00.257) 0:03:09.781 *****
2026-01-07 00:52:05.514588 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:52:05.514594 | orchestrator |
2026-01-07 00:52:05.514599 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-07 00:52:05.514604 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:00.535) 0:03:10.317 *****
2026-01-07 00:52:05.514610 | orchestrator | changed: [testbed-node-0 -> localhost]
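The kubeconfig preparation above copies the file from the first master and then rewrites its server address twice (once for the operator user, once inside the manager service): the kubeconfig k3s generates points at the node-local API endpoint, so it must be re-pointed at the cluster VIP (https://192.168.16.8:6443 per the log). A minimal sketch of that rewrite, assuming the usual k3s default of https://127.0.0.1:6443 in the generated file:

```python
def point_kubeconfig_at_vip(kubeconfig_text, vip="192.168.16.8", port=6443):
    """Replace the local-only API server address in a k3s-generated
    kubeconfig with the cluster VIP, leaving everything else untouched."""
    return kubeconfig_text.replace(
        "https://127.0.0.1:6443", f"https://{vip}:{port}"
    )

sample = (
    "clusters:\n"
    "- cluster:\n"
    "    server: https://127.0.0.1:6443\n"
    "  name: default\n"
)
rewritten = point_kubeconfig_at_vip(sample)
print("192.168.16.8:6443" in rewritten)  # → True
```

A plain string replace is enough here because the address appears exactly once; a YAML-aware edit would be the safer choice if the file had multiple clusters.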
2026-01-07 00:52:05.514615 | orchestrator | 2026-01-07 00:52:05.514621 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-07 00:52:05.514626 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:00.728) 0:03:11.045 ***** 2026-01-07 00:52:05.514632 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.514637 | orchestrator | 2026-01-07 00:52:05.514642 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-07 00:52:05.514647 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:01.069) 0:03:12.115 ***** 2026-01-07 00:52:05.514652 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.514657 | orchestrator | 2026-01-07 00:52:05.515018 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-07 00:52:05.515032 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:00.093) 0:03:12.209 ***** 2026-01-07 00:52:05.515038 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.515043 | orchestrator | 2026-01-07 00:52:05.515049 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-07 00:52:05.515056 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.763) 0:03:12.972 ***** 2026-01-07 00:52:05.515069 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515074 | orchestrator | 2026-01-07 00:52:05.515079 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-07 00:52:05.515084 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.098) 0:03:13.071 ***** 2026-01-07 00:52:05.515090 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515095 | orchestrator | 2026-01-07 00:52:05.515100 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-07 00:52:05.515106 | 
orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.106) 0:03:13.178 ***** 2026-01-07 00:52:05.515111 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515116 | orchestrator | 2026-01-07 00:52:05.515122 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-07 00:52:05.515127 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.120) 0:03:13.298 ***** 2026-01-07 00:52:05.515132 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515137 | orchestrator | 2026-01-07 00:52:05.515142 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-07 00:52:05.515148 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:00.095) 0:03:13.394 ***** 2026-01-07 00:52:05.515153 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.515158 | orchestrator | 2026-01-07 00:52:05.515164 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-07 00:52:05.515169 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:05.047) 0:03:18.442 ***** 2026-01-07 00:52:05.515174 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-07 00:52:05.515185 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-07 00:52:05.515190 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-07 00:52:05.515196 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-07 00:52:05.515201 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-07 00:52:05.515206 | orchestrator | 2026-01-07 00:52:05.515211 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-07 00:52:05.515216 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:42.553) 0:04:00.995 ***** 2026-01-07 00:52:05.515221 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.515226 | orchestrator | 2026-01-07 00:52:05.515232 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-07 00:52:05.515237 | orchestrator | Wednesday 07 January 2026 00:51:40 +0000 (0:00:01.207) 0:04:02.203 ***** 2026-01-07 00:52:05.515242 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.515247 | orchestrator | 2026-01-07 00:52:05.515252 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-07 00:52:05.515257 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:01.647) 0:04:03.850 ***** 2026-01-07 00:52:05.515262 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:52:05.515267 | orchestrator | 2026-01-07 00:52:05.515271 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-07 00:52:05.515276 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:01.133) 0:04:04.984 ***** 2026-01-07 00:52:05.515281 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515286 | orchestrator | 2026-01-07 00:52:05.515356 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-07 00:52:05.515362 | orchestrator 
| Wednesday 07 January 2026 00:51:43 +0000 (0:00:00.117) 0:04:05.102 ***** 2026-01-07 00:52:05.515367 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-07 00:52:05.515372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-07 00:52:05.515378 | orchestrator | 2026-01-07 00:52:05.515383 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-07 00:52:05.515392 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:02.020) 0:04:07.122 ***** 2026-01-07 00:52:05.515397 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515402 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:05.515407 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:05.515411 | orchestrator | 2026-01-07 00:52:05.515415 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-07 00:52:05.515420 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:00.393) 0:04:07.516 ***** 2026-01-07 00:52:05.515425 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:05.515430 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:05.515435 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:05.515440 | orchestrator | 2026-01-07 00:52:05.515445 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-07 00:52:05.515450 | orchestrator | 2026-01-07 00:52:05.515455 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-07 00:52:05.515460 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.980) 0:04:08.496 ***** 2026-01-07 00:52:05.515465 | orchestrator | ok: [testbed-manager] 2026-01-07 00:52:05.515470 | orchestrator | 2026-01-07 00:52:05.515527 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-07 00:52:05.515534 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:00.129) 0:04:08.626 ***** 2026-01-07 00:52:05.515539 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-07 00:52:05.515544 | orchestrator | 2026-01-07 00:52:05.515549 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-07 00:52:05.515554 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:00.212) 0:04:08.838 ***** 2026-01-07 00:52:05.515559 | orchestrator | changed: [testbed-manager] 2026-01-07 00:52:05.515564 | orchestrator | 2026-01-07 00:52:05.515569 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-07 00:52:05.515575 | orchestrator | 2026-01-07 00:52:05.515584 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-07 00:52:05.515589 | orchestrator | Wednesday 07 January 2026 00:51:52 +0000 (0:00:05.117) 0:04:13.955 ***** 2026-01-07 00:52:05.515595 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:52:05.515599 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:52:05.515605 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:52:05.515609 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:05.515614 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:05.515620 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:05.515626 | orchestrator | 2026-01-07 00:52:05.515631 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-07 00:52:05.515636 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:00.728) 0:04:14.684 ***** 2026-01-07 00:52:05.515642 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:52:05.515647 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-01-07 00:52:05.515652 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:52:05.515657 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-07 00:52:05.515662 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:52:05.515668 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-07 00:52:05.515673 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:52:05.515679 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:52:05.515691 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:52:05.515697 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:52:05.515707 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-07 00:52:05.515713 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-07 00:52:05.515718 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:52:05.515723 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:52:05.515729 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:52:05.515734 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:52:05.515739 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-07 00:52:05.515744 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-07 00:52:05.515750 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:52:05.515755 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:52:05.515760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-07 00:52:05.515765 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:52:05.515771 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:52:05.515777 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-07 00:52:05.515783 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:52:05.515789 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:52:05.515794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-07 00:52:05.515800 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:52:05.515806 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:52:05.515811 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-07 00:52:05.515817 | orchestrator | 2026-01-07 00:52:05.515823 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-07 00:52:05.515828 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:10.954) 0:04:25.639 ***** 2026-01-07 00:52:05.515834 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:05.515840 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:05.515844 | orchestrator | 
skipping: [testbed-node-5] 2026-01-07 00:52:05.515847 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515851 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:05.515854 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:05.515857 | orchestrator | 2026-01-07 00:52:05.515860 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-07 00:52:05.515863 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:00.631) 0:04:26.271 ***** 2026-01-07 00:52:05.515866 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:52:05.515869 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:52:05.515872 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:52:05.515875 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:05.515878 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:05.515881 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:05.515884 | orchestrator | 2026-01-07 00:52:05.515887 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:52:05.515893 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:52:05.515897 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-07 00:52:05.515904 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:52:05.515907 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-07 00:52:05.515910 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:52:05.515913 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:52:05.515916 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-07 00:52:05.515919 | orchestrator | 2026-01-07 00:52:05.515922 | orchestrator | 2026-01-07 00:52:05.515925 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:52:05.515932 | orchestrator | Wednesday 07 January 2026 00:52:05 +0000 (0:00:00.436) 0:04:26.707 ***** 2026-01-07 00:52:05.515935 | orchestrator | =============================================================================== 2026-01-07 00:52:05.515938 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.29s 2026-01-07 00:52:05.515941 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.55s 2026-01-07 00:52:05.515944 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.07s 2026-01-07 00:52:05.515948 | orchestrator | kubectl : Install required packages ------------------------------------ 12.26s 2026-01-07 00:52:05.515951 | orchestrator | Manage labels ---------------------------------------------------------- 10.95s 2026-01-07 00:52:05.515954 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.76s 2026-01-07 00:52:05.515957 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.41s 2026-01-07 00:52:05.515960 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.22s 2026-01-07 00:52:05.515963 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.12s 2026-01-07 00:52:05.515966 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.05s 2026-01-07 00:52:05.515969 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.17s 2026-01-07 00:52:05.515972 | orchestrator | k3s_server : Detect Kubernetes version for label 
compatibility ---------- 2.87s 2026-01-07 00:52:05.515975 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.70s 2026-01-07 00:52:05.515978 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.58s 2026-01-07 00:52:05.515981 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.31s 2026-01-07 00:52:05.515984 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.09s 2026-01-07 00:52:05.515987 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.02s 2026-01-07 00:52:05.515990 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.98s 2026-01-07 00:52:05.515993 | orchestrator | k3s_server : Create /etc/rancher/k3s directory -------------------------- 1.70s 2026-01-07 00:52:05.515997 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.70s 2026-01-07 00:52:05.516001 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:05.516005 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:05.516008 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:05.516014 | orchestrator | 2026-01-07 00:52:05 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:05.516018 | orchestrator | 2026-01-07 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:08.590694 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task f3e3c8a5-cd0b-4d20-abf1-cb325b9e914e is in state STARTED 2026-01-07 00:52:08.592272 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task ef8e432b-300e-4567-9c2a-1fa0b290d57e is in state 
STARTED 2026-01-07 00:52:08.592307 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:08.593172 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:08.594141 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:08.597310 | orchestrator | 2026-01-07 00:52:08 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:08.597357 | orchestrator | 2026-01-07 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:11.631908 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task f3e3c8a5-cd0b-4d20-abf1-cb325b9e914e is in state STARTED 2026-01-07 00:52:11.631964 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task ef8e432b-300e-4567-9c2a-1fa0b290d57e is in state STARTED 2026-01-07 00:52:11.632555 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:11.633312 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:11.634263 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:11.634987 | orchestrator | 2026-01-07 00:52:11 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:11.635321 | orchestrator | 2026-01-07 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:14.664210 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task f3e3c8a5-cd0b-4d20-abf1-cb325b9e914e is in state STARTED 2026-01-07 00:52:14.664401 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task ef8e432b-300e-4567-9c2a-1fa0b290d57e is in state SUCCESS 2026-01-07 00:52:14.665319 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 
2026-01-07 00:52:14.667189 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:14.669093 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:14.670817 | orchestrator | 2026-01-07 00:52:14 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:14.670870 | orchestrator | 2026-01-07 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:17.699685 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task f3e3c8a5-cd0b-4d20-abf1-cb325b9e914e is in state SUCCESS 2026-01-07 00:52:17.701344 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:17.701553 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:17.702296 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:17.702878 | orchestrator | 2026-01-07 00:52:17 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:17.702945 | orchestrator | 2026-01-07 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:20.738088 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:20.738808 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:20.739680 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:20.741176 | orchestrator | 2026-01-07 00:52:20 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:20.741322 | orchestrator | 2026-01-07 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:23.785667 | 
orchestrator | 2026-01-07 00:52:23 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:23.786089 | orchestrator | 2026-01-07 00:52:23 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:23.787060 | orchestrator | 2026-01-07 00:52:23 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:23.788371 | orchestrator | 2026-01-07 00:52:23 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:23.788528 | orchestrator | 2026-01-07 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:26.824165 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:26.826011 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:26.828037 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:26.830307 | orchestrator | 2026-01-07 00:52:26 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:26.830392 | orchestrator | 2026-01-07 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:29.863566 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:29.865254 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:29.867697 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:29.869247 | orchestrator | 2026-01-07 00:52:29 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:29.869324 | orchestrator | 2026-01-07 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:32.909483 | orchestrator | 2026-01-07 
00:52:32 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:32.912335 | orchestrator | 2026-01-07 00:52:32 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:32.914573 | orchestrator | 2026-01-07 00:52:32 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:32.917162 | orchestrator | 2026-01-07 00:52:32 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:32.917250 | orchestrator | 2026-01-07 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:35.957578 | orchestrator | 2026-01-07 00:52:35 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:35.959829 | orchestrator | 2026-01-07 00:52:35 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:35.962759 | orchestrator | 2026-01-07 00:52:35 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:35.962825 | orchestrator | 2026-01-07 00:52:35 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:35.962836 | orchestrator | 2026-01-07 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:39.011584 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:39.013150 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:39.015488 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:39.019560 | orchestrator | 2026-01-07 00:52:39 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:39.019649 | orchestrator | 2026-01-07 00:52:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:42.056819 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task 
72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:42.060249 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:42.061065 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state STARTED 2026-01-07 00:52:42.061862 | orchestrator | 2026-01-07 00:52:42 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:42.061885 | orchestrator | 2026-01-07 00:52:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:45.102527 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:45.104696 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:45.106947 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task 3deb5e52-bb2c-44d0-a77e-2f6135bdfb25 is in state SUCCESS 2026-01-07 00:52:45.109529 | orchestrator | 2026-01-07 00:52:45.109621 | orchestrator | 2026-01-07 00:52:45.109628 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-07 00:52:45.109634 | orchestrator | 2026-01-07 00:52:45.109639 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:52:45.109643 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.146) 0:00:00.146 ***** 2026-01-07 00:52:45.109649 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:52:45.109653 | orchestrator | 2026-01-07 00:52:45.109657 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:52:45.109661 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:00.832) 0:00:00.979 ***** 2026-01-07 00:52:45.109666 | orchestrator | changed: [testbed-manager] 2026-01-07 00:52:45.109670 | orchestrator | 2026-01-07 
00:52:45.109691 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-07 00:52:45.109695 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:01.006) 0:00:01.986 ***** 2026-01-07 00:52:45.109699 | orchestrator | changed: [testbed-manager] 2026-01-07 00:52:45.109703 | orchestrator | 2026-01-07 00:52:45.109706 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:52:45.109711 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:52:45.109717 | orchestrator | 2026-01-07 00:52:45.109721 | orchestrator | 2026-01-07 00:52:45.109725 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:52:45.109729 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:00.422) 0:00:02.408 ***** 2026-01-07 00:52:45.109732 | orchestrator | =============================================================================== 2026-01-07 00:52:45.109754 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s 2026-01-07 00:52:45.109759 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s 2026-01-07 00:52:45.109763 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2026-01-07 00:52:45.109769 | orchestrator | 2026-01-07 00:52:45.109774 | orchestrator | 2026-01-07 00:52:45.109780 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-07 00:52:45.109787 | orchestrator | 2026-01-07 00:52:45.109792 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-07 00:52:45.109798 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.121) 0:00:00.121 ***** 2026-01-07 00:52:45.109805 | orchestrator | ok: [testbed-manager] 
2026-01-07 00:52:45.109813 | orchestrator | 2026-01-07 00:52:45.109819 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-07 00:52:45.109825 | orchestrator | Wednesday 07 January 2026 00:52:09 +0000 (0:00:00.603) 0:00:00.724 ***** 2026-01-07 00:52:45.109831 | orchestrator | ok: [testbed-manager] 2026-01-07 00:52:45.109837 | orchestrator | 2026-01-07 00:52:45.109841 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-07 00:52:45.109845 | orchestrator | Wednesday 07 January 2026 00:52:10 +0000 (0:00:00.545) 0:00:01.270 ***** 2026-01-07 00:52:45.109849 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-07 00:52:45.109853 | orchestrator | 2026-01-07 00:52:45.109857 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-07 00:52:45.109860 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:00.657) 0:00:01.928 ***** 2026-01-07 00:52:45.109864 | orchestrator | changed: [testbed-manager] 2026-01-07 00:52:45.109868 | orchestrator | 2026-01-07 00:52:45.109872 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-07 00:52:45.109875 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:01.263) 0:00:03.191 ***** 2026-01-07 00:52:45.109879 | orchestrator | changed: [testbed-manager] 2026-01-07 00:52:45.109883 | orchestrator | 2026-01-07 00:52:45.109887 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-07 00:52:45.109891 | orchestrator | Wednesday 07 January 2026 00:52:13 +0000 (0:00:00.621) 0:00:03.813 ***** 2026-01-07 00:52:45.109895 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:52:45.109899 | orchestrator | 2026-01-07 00:52:45.109902 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 
2026-01-07 00:52:45.109906 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:01.444) 0:00:05.257 ***** 2026-01-07 00:52:45.109911 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-07 00:52:45.109915 | orchestrator | 2026-01-07 00:52:45.109918 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-07 00:52:45.109922 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.796) 0:00:06.053 ***** 2026-01-07 00:52:45.109927 | orchestrator | ok: [testbed-manager] 2026-01-07 00:52:45.109931 | orchestrator | 2026-01-07 00:52:45.109935 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-07 00:52:45.109939 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.376) 0:00:06.430 ***** 2026-01-07 00:52:45.109944 | orchestrator | ok: [testbed-manager] 2026-01-07 00:52:45.109948 | orchestrator | 2026-01-07 00:52:45.109952 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:52:45.109957 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 00:52:45.109961 | orchestrator | 2026-01-07 00:52:45.109966 | orchestrator | 2026-01-07 00:52:45.109970 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:52:45.109975 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.320) 0:00:06.750 ***** 2026-01-07 00:52:45.109979 | orchestrator | =============================================================================== 2026-01-07 00:52:45.109988 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.44s 2026-01-07 00:52:45.109992 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s 2026-01-07 00:52:45.109997 | orchestrator | Change server address in the kubeconfig inside 
the manager service ------ 0.80s 2026-01-07 00:52:45.110078 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s 2026-01-07 00:52:45.110086 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s 2026-01-07 00:52:45.110091 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s 2026-01-07 00:52:45.110095 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2026-01-07 00:52:45.110099 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2026-01-07 00:52:45.110104 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2026-01-07 00:52:45.110108 | orchestrator | 2026-01-07 00:52:45.110112 | orchestrator | 2026-01-07 00:52:45.110117 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-07 00:52:45.110121 | orchestrator | 2026-01-07 00:52:45.110129 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 00:52:45.110134 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:00.152) 0:00:00.152 ***** 2026-01-07 00:52:45.110138 | orchestrator | ok: [localhost] => { 2026-01-07 00:52:45.110144 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-07 00:52:45.110149 | orchestrator | } 2026-01-07 00:52:45.110153 | orchestrator | 2026-01-07 00:52:45.110158 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-07 00:52:45.110162 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:00.027) 0:00:00.180 ***** 2026-01-07 00:52:45.110185 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-07 00:52:45.110191 | orchestrator | ...ignoring 2026-01-07 00:52:45.110196 | orchestrator | 2026-01-07 00:52:45.110200 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-07 00:52:45.110204 | orchestrator | Wednesday 07 January 2026 00:50:32 +0000 (0:00:02.907) 0:00:03.087 ***** 2026-01-07 00:52:45.110209 | orchestrator | skipping: [localhost] 2026-01-07 00:52:45.110213 | orchestrator | 2026-01-07 00:52:45.110218 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-07 00:52:45.110222 | orchestrator | Wednesday 07 January 2026 00:50:32 +0000 (0:00:00.038) 0:00:03.126 ***** 2026-01-07 00:52:45.110226 | orchestrator | ok: [localhost] 2026-01-07 00:52:45.110231 | orchestrator | 2026-01-07 00:52:45.110235 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:52:45.110239 | orchestrator | 2026-01-07 00:52:45.110244 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:52:45.110248 | orchestrator | Wednesday 07 January 2026 00:50:32 +0000 (0:00:00.115) 0:00:03.241 ***** 2026-01-07 00:52:45.110252 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:45.110257 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:45.110261 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:45.110265 | orchestrator | 2026-01-07 00:52:45.110270 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:52:45.110274 | orchestrator | Wednesday 07 January 2026 00:50:32 +0000 (0:00:00.243) 0:00:03.485 ***** 2026-01-07 00:52:45.110278 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-07 00:52:45.110283 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-07 00:52:45.110288 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-07 00:52:45.110292 | orchestrator | 2026-01-07 00:52:45.110296 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-07 00:52:45.110306 | orchestrator | 2026-01-07 00:52:45.110310 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:52:45.110315 | orchestrator | Wednesday 07 January 2026 00:50:33 +0000 (0:00:00.774) 0:00:04.259 ***** 2026-01-07 00:52:45.110319 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:52:45.110324 | orchestrator | 2026-01-07 00:52:45.110329 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:52:45.110333 | orchestrator | Wednesday 07 January 2026 00:50:34 +0000 (0:00:00.452) 0:00:04.712 ***** 2026-01-07 00:52:45.110338 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:45.110342 | orchestrator | 2026-01-07 00:52:45.110346 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-07 00:52:45.110350 | orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:00.812) 0:00:05.524 ***** 2026-01-07 00:52:45.110354 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110357 | orchestrator | 2026-01-07 00:52:45.110361 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-07 00:52:45.110365 | orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:00.384) 0:00:05.908 ***** 2026-01-07 00:52:45.110369 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110372 | orchestrator | 2026-01-07 00:52:45.110376 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-07 00:52:45.110380 | 
orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:00.316) 0:00:06.225 ***** 2026-01-07 00:52:45.110384 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110387 | orchestrator | 2026-01-07 00:52:45.110408 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-07 00:52:45.110415 | orchestrator | Wednesday 07 January 2026 00:50:36 +0000 (0:00:00.546) 0:00:06.772 ***** 2026-01-07 00:52:45.110422 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110427 | orchestrator | 2026-01-07 00:52:45.110431 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:52:45.110435 | orchestrator | Wednesday 07 January 2026 00:50:38 +0000 (0:00:02.415) 0:00:09.187 ***** 2026-01-07 00:52:45.110438 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:52:45.110442 | orchestrator | 2026-01-07 00:52:45.110446 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-07 00:52:45.110455 | orchestrator | Wednesday 07 January 2026 00:50:39 +0000 (0:00:00.781) 0:00:09.969 ***** 2026-01-07 00:52:45.110459 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:45.110463 | orchestrator | 2026-01-07 00:52:45.110467 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-07 00:52:45.110470 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.778) 0:00:10.747 ***** 2026-01-07 00:52:45.110474 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110478 | orchestrator | 2026-01-07 00:52:45.110482 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-07 00:52:45.110485 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.595) 0:00:11.342 ***** 2026-01-07 00:52:45.110489 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 00:52:45.110493 | orchestrator | 2026-01-07 00:52:45.110497 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-07 00:52:45.110504 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.423) 0:00:11.766 ***** 2026-01-07 00:52:45.110514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110536 | orchestrator | 2026-01-07 00:52:45.110540 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-07 00:52:45.110544 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:01.260) 0:00:13.027 ***** 2026-01-07 00:52:45.110556 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110574 | orchestrator | 2026-01-07 00:52:45.110578 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-07 00:52:45.110582 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:02.073) 0:00:15.101 ***** 2026-01-07 00:52:45.110586 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:52:45.110589 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:52:45.110593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-07 00:52:45.110597 | 
orchestrator | 2026-01-07 00:52:45.110601 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-07 00:52:45.110605 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:02.389) 0:00:17.490 ***** 2026-01-07 00:52:45.110610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:52:45.110616 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:52:45.110622 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-07 00:52:45.110627 | orchestrator | 2026-01-07 00:52:45.110642 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-07 00:52:45.110648 | orchestrator | Wednesday 07 January 2026 00:50:49 +0000 (0:00:02.500) 0:00:19.990 ***** 2026-01-07 00:52:45.110654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:52:45.110660 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:52:45.110666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-07 00:52:45.110678 | orchestrator | 2026-01-07 00:52:45.110684 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-07 00:52:45.110690 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:01.756) 0:00:21.747 ***** 2026-01-07 00:52:45.110700 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:52:45.110707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:52:45.110712 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-07 00:52:45.110716 | orchestrator | 2026-01-07 00:52:45.110719 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-07 00:52:45.110723 | orchestrator | Wednesday 07 January 2026 00:50:53 +0000 (0:00:01.959) 0:00:23.706 ***** 2026-01-07 00:52:45.110727 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:52:45.110731 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:52:45.110734 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-07 00:52:45.110738 | orchestrator | 2026-01-07 00:52:45.110742 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-07 00:52:45.110745 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:01.411) 0:00:25.118 ***** 2026-01-07 00:52:45.110749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:52:45.110753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:52:45.110757 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-07 00:52:45.110760 | orchestrator | 2026-01-07 00:52:45.110764 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-07 00:52:45.110769 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:01.612) 0:00:26.730 ***** 2026-01-07 00:52:45.110775 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110785 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:45.110791 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:45.110798 | orchestrator | 2026-01-07 
00:52:45.110804 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-07 00:52:45.110810 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:00.502) 0:00:27.232 ***** 2026-01-07 00:52:45.110817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:52:45.110855 | orchestrator | 2026-01-07 00:52:45.110861 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-07 00:52:45.110868 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:02.658) 0:00:29.891 ***** 2026-01-07 00:52:45.110873 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:45.110880 | orchestrator | changed: [testbed-node-1] 
2026-01-07 00:52:45.110886 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:45.110892 | orchestrator | 2026-01-07 00:52:45.110898 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-07 00:52:45.110913 | orchestrator | Wednesday 07 January 2026 00:51:00 +0000 (0:00:01.350) 0:00:31.242 ***** 2026-01-07 00:52:45.110918 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:45.110922 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:45.110926 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:45.110930 | orchestrator | 2026-01-07 00:52:45.110933 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-07 00:52:45.110937 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:07.539) 0:00:38.781 ***** 2026-01-07 00:52:45.110941 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:45.110945 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:45.110949 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:45.110952 | orchestrator | 2026-01-07 00:52:45.110956 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:52:45.110960 | orchestrator | 2026-01-07 00:52:45.110964 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:52:45.110968 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.356) 0:00:39.137 ***** 2026-01-07 00:52:45.110971 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:45.110976 | orchestrator | 2026-01-07 00:52:45.110980 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:52:45.110984 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.710) 0:00:39.848 ***** 2026-01-07 00:52:45.110987 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:52:45.110991 | orchestrator | 2026-01-07 
00:52:45.110995 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:52:45.111003 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.291) 0:00:40.140 ***** 2026-01-07 00:52:45.111007 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:45.111010 | orchestrator | 2026-01-07 00:52:45.111014 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:52:45.111018 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:02.341) 0:00:42.482 ***** 2026-01-07 00:52:45.111022 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:52:45.111028 | orchestrator | 2026-01-07 00:52:45.111034 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:52:45.111039 | orchestrator | 2026-01-07 00:52:45.111046 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:52:45.111051 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:52.384) 0:01:34.867 ***** 2026-01-07 00:52:45.111058 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:45.111063 | orchestrator | 2026-01-07 00:52:45.111070 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:52:45.111076 | orchestrator | Wednesday 07 January 2026 00:52:04 +0000 (0:00:00.476) 0:01:35.343 ***** 2026-01-07 00:52:45.111082 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:52:45.111089 | orchestrator | 2026-01-07 00:52:45.111095 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:52:45.111101 | orchestrator | Wednesday 07 January 2026 00:52:05 +0000 (0:00:00.210) 0:01:35.553 ***** 2026-01-07 00:52:45.111107 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:45.111113 | orchestrator | 2026-01-07 00:52:45.111119 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-01-07 00:52:45.111125 | orchestrator | Wednesday 07 January 2026 00:52:07 +0000 (0:00:02.407) 0:01:37.960 ***** 2026-01-07 00:52:45.111131 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:52:45.111137 | orchestrator | 2026-01-07 00:52:45.111143 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-07 00:52:45.111149 | orchestrator | 2026-01-07 00:52:45.111155 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-07 00:52:45.111166 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:13.258) 0:01:51.219 ***** 2026-01-07 00:52:45.111172 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:45.111179 | orchestrator | 2026-01-07 00:52:45.111184 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-07 00:52:45.111191 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.573) 0:01:51.792 ***** 2026-01-07 00:52:45.111197 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:52:45.111203 | orchestrator | 2026-01-07 00:52:45.111209 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-07 00:52:45.111215 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.226) 0:01:52.019 ***** 2026-01-07 00:52:45.111221 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:45.111227 | orchestrator | 2026-01-07 00:52:45.111243 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-07 00:52:45.111250 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:01.568) 0:01:53.587 ***** 2026-01-07 00:52:45.111256 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:52:45.111262 | orchestrator | 2026-01-07 00:52:45.111268 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-01-07 00:52:45.111274 | orchestrator | 2026-01-07 00:52:45.111280 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-07 00:52:45.111286 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:15.375) 0:02:08.963 ***** 2026-01-07 00:52:45.111293 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:52:45.111299 | orchestrator | 2026-01-07 00:52:45.111305 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-07 00:52:45.111311 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:00.742) 0:02:09.706 ***** 2026-01-07 00:52:45.111318 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-07 00:52:45.111329 | orchestrator | enable_outward_rabbitmq_True 2026-01-07 00:52:45.111335 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-07 00:52:45.111341 | orchestrator | outward_rabbitmq_restart 2026-01-07 00:52:45.111347 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:52:45.111354 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:52:45.111359 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:52:45.111366 | orchestrator | 2026-01-07 00:52:45.111372 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-07 00:52:45.111378 | orchestrator | skipping: no hosts matched 2026-01-07 00:52:45.111384 | orchestrator | 2026-01-07 00:52:45.111415 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-07 00:52:45.111423 | orchestrator | skipping: no hosts matched 2026-01-07 00:52:45.111429 | orchestrator | 2026-01-07 00:52:45.111435 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-07 00:52:45.111441 | orchestrator | skipping: no hosts matched 
2026-01-07 00:52:45.111447 | orchestrator | 2026-01-07 00:52:45.111454 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:52:45.111460 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-07 00:52:45.111469 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-07 00:52:45.111474 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:52:45.111478 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-07 00:52:45.111481 | orchestrator | 2026-01-07 00:52:45.111485 | orchestrator | 2026-01-07 00:52:45.111489 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:52:45.111493 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:02.861) 0:02:12.567 ***** 2026-01-07 00:52:45.111496 | orchestrator | =============================================================================== 2026-01-07 00:52:45.111500 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.02s 2026-01-07 00:52:45.111504 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.54s 2026-01-07 00:52:45.111507 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.32s 2026-01-07 00:52:45.111511 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.91s 2026-01-07 00:52:45.111515 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.86s 2026-01-07 00:52:45.111518 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.66s 2026-01-07 00:52:45.111522 | orchestrator | rabbitmq : Copying over rabbitmq.conf 
----------------------------------- 2.50s 2026-01-07 00:52:45.111526 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 2.42s 2026-01-07 00:52:45.111530 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.39s 2026-01-07 00:52:45.111534 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.07s 2026-01-07 00:52:45.111540 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.96s 2026-01-07 00:52:45.111546 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.76s 2026-01-07 00:52:45.111552 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.76s 2026-01-07 00:52:45.111558 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.61s 2026-01-07 00:52:45.111564 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.41s 2026-01-07 00:52:45.111574 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.35s 2026-01-07 00:52:45.111586 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.26s 2026-01-07 00:52:45.111592 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.81s 2026-01-07 00:52:45.111598 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.78s 2026-01-07 00:52:45.111604 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.78s 2026-01-07 00:52:45.111610 | orchestrator | 2026-01-07 00:52:45 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:45.111620 | orchestrator | 2026-01-07 00:52:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:52:48.175027 | orchestrator | 2026-01-07 00:52:48 | INFO  | Task 
72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:52:48.175143 | orchestrator | 2026-01-07 00:52:48 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:52:48.176223 | orchestrator | 2026-01-07 00:52:48 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state STARTED 2026-01-07 00:52:48.176275 | orchestrator | 2026-01-07 00:52:48 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling entries for these three tasks repeated every ~3 seconds from 00:52:51 through 00:53:30; condensed] 2026-01-07 00:53:33.925632 | orchestrator | 2026-01-07 00:53:33 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:53:33.929336 | orchestrator | 2026-01-07 00:53:33 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in
state STARTED 2026-01-07 00:53:33.932165 | orchestrator | 2026-01-07 00:53:33.933825 | orchestrator | 2026-01-07 00:53:33 | INFO  | Task 23268a6c-35a3-4893-969e-cd6c01849fe9 is in state SUCCESS 2026-01-07 00:53:33.934480 | orchestrator | 2026-01-07 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:53:33.935772 | orchestrator | 2026-01-07 00:53:33.935800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:53:33.935805 | orchestrator | 2026-01-07 00:53:33.935809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:53:33.935813 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.164) 0:00:00.164 ***** 2026-01-07 00:53:33.935817 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:33.935848 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:33.935852 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:33.935856 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.935860 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.935863 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.935867 | orchestrator | 2026-01-07 00:53:33.935871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:53:33.935875 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.678) 0:00:00.843 ***** 2026-01-07 00:53:33.935879 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-07 00:53:33.935883 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-07 00:53:33.935887 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-07 00:53:33.935891 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-07 00:53:33.935895 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-07 00:53:33.935906 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-07 
00:53:33.935910 | orchestrator | 2026-01-07 00:53:33.935914 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-07 00:53:33.935917 | orchestrator | 2026-01-07 00:53:33.935921 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-07 00:53:33.935925 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:01.084) 0:00:01.928 ***** 2026-01-07 00:53:33.935929 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:33.935934 | orchestrator | 2026-01-07 00:53:33.935938 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-07 00:53:33.935942 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:00.985) 0:00:02.913 ***** 2026-01-07 00:53:33.935963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.935972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.935979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.935985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.935991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936012 | orchestrator | 2026-01-07 00:53:33.936018 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-07 00:53:33.936025 | 
orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:01.334) 0:00:04.248 ***** 2026-01-07 00:53:33.936031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-07 00:53:33.936066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936077 | orchestrator | 2026-01-07 00:53:33.936083 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-07 00:53:33.936089 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:01.274) 0:00:05.522 ***** 2026-01-07 00:53:33.936095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936142 | orchestrator | 2026-01-07 00:53:33.936148 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-07 00:53:33.936154 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:01.125) 0:00:06.647 ***** 2026-01-07 00:53:33.936160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936201 | orchestrator | 2026-01-07 00:53:33.936207 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-07 00:53:33.936213 | orchestrator | Wednesday 07 January 2026 00:51:20 +0000 (0:00:01.495) 0:00:08.143 ***** 2026-01-07 00:53:33.936224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936255 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.936261 | orchestrator | 2026-01-07 00:53:33.936266 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-07 00:53:33.936272 | orchestrator | Wednesday 07 January 2026 00:51:21 +0000 (0:00:01.321) 0:00:09.464 ***** 2026-01-07 00:53:33.936311 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:33.936318 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:33.936324 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:33.936330 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:33.936335 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:33.936341 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:33.936347 | orchestrator | 2026-01-07 00:53:33.936353 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-07 00:53:33.936358 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:02.301) 0:00:11.765 ***** 2026-01-07 00:53:33.936364 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-07 00:53:33.936370 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-07 00:53:33.936376 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-07 00:53:33.936661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-07 00:53:33.936670 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-07 00:53:33.936674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-07 00:53:33.936678 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936682 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936686 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-07 00:53:33.936704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936709 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936713 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936720 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936725 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-07 00:53:33.936732 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936736 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936743 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936747 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936751 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-07 00:53:33.936755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936759 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936766 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:53:33.936770 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936774 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-07 00:53:33.936778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 
2026-01-07 00:53:33.936784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:53:33.936788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:53:33.936792 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:53:33.936796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:53:33.936800 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:53:33.936803 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-07 00:53:33.936807 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-07 00:53:33.936811 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:53:33.936818 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-07 00:53:33.936822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:53:33.936826 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-07 00:53:33.936830 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-07 00:53:33.936834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-07 00:53:33.936838 | 
orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-07 00:53:33.936842 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:53:33.936848 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-07 00:53:33.936852 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:53:33.936856 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-07 00:53:33.936860 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:53:33.936864 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-07 00:53:33.936868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:53:33.936871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-07 00:53:33.936875 | orchestrator | 2026-01-07 00:53:33.936879 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936883 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:21.084) 0:00:32.849 ***** 2026-01-07 00:53:33.936887 | orchestrator | 2026-01-07 00:53:33.936891 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936895 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:00.072) 
0:00:32.922 ***** 2026-01-07 00:53:33.936898 | orchestrator | 2026-01-07 00:53:33.936902 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936906 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:00.269) 0:00:33.192 ***** 2026-01-07 00:53:33.936910 | orchestrator | 2026-01-07 00:53:33.936917 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936921 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:00.210) 0:00:33.402 ***** 2026-01-07 00:53:33.936924 | orchestrator | 2026-01-07 00:53:33.936928 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936932 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.209) 0:00:33.612 ***** 2026-01-07 00:53:33.936936 | orchestrator | 2026-01-07 00:53:33.936940 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-07 00:53:33.936943 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.166) 0:00:33.779 ***** 2026-01-07 00:53:33.936947 | orchestrator | 2026-01-07 00:53:33.936951 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-07 00:53:33.936955 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:00.062) 0:00:33.841 ***** 2026-01-07 00:53:33.936959 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:53:33.936963 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:53:33.936966 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.936970 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:53:33.936974 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.936978 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.936982 | orchestrator | 2026-01-07 00:53:33.936985 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] 
************ 2026-01-07 00:53:33.936989 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:01.457) 0:00:35.299 ***** 2026-01-07 00:53:33.936993 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:53:33.936997 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:53:33.937001 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:53:33.937005 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:53:33.937008 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:53:33.937012 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:53:33.937016 | orchestrator | 2026-01-07 00:53:33.937020 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-07 00:53:33.937024 | orchestrator | 2026-01-07 00:53:33.937028 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:53:33.937031 | orchestrator | Wednesday 07 January 2026 00:52:13 +0000 (0:00:25.610) 0:01:00.910 ***** 2026-01-07 00:53:33.937035 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:33.937039 | orchestrator | 2026-01-07 00:53:33.937043 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:53:33.937047 | orchestrator | Wednesday 07 January 2026 00:52:14 +0000 (0:00:01.226) 0:01:02.137 ***** 2026-01-07 00:53:33.937050 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:33.937054 | orchestrator | 2026-01-07 00:53:33.937061 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-07 00:53:33.937065 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:00.672) 0:01:02.809 ***** 2026-01-07 00:53:33.937068 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937072 | orchestrator | ok: 
[testbed-node-0] 2026-01-07 00:53:33.937076 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937080 | orchestrator | 2026-01-07 00:53:33.937084 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-07 00:53:33.937087 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.881) 0:01:03.691 ***** 2026-01-07 00:53:33.937091 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937095 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937099 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937103 | orchestrator | 2026-01-07 00:53:33.937106 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-07 00:53:33.937110 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.506) 0:01:04.197 ***** 2026-01-07 00:53:33.937114 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937120 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937124 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937128 | orchestrator | 2026-01-07 00:53:33.937132 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-07 00:53:33.937140 | orchestrator | Wednesday 07 January 2026 00:52:16 +0000 (0:00:00.363) 0:01:04.560 ***** 2026-01-07 00:53:33.937144 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937148 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937152 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937155 | orchestrator | 2026-01-07 00:53:33.937159 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-07 00:53:33.937163 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.345) 0:01:04.906 ***** 2026-01-07 00:53:33.937167 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937171 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937174 | orchestrator | ok: 
[testbed-node-2] 2026-01-07 00:53:33.937178 | orchestrator | 2026-01-07 00:53:33.937182 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-07 00:53:33.937186 | orchestrator | Wednesday 07 January 2026 00:52:17 +0000 (0:00:00.514) 0:01:05.420 ***** 2026-01-07 00:53:33.937190 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937193 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937197 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937201 | orchestrator | 2026-01-07 00:53:33.937205 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-07 00:53:33.937209 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.314) 0:01:05.735 ***** 2026-01-07 00:53:33.937212 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937216 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937220 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937224 | orchestrator | 2026-01-07 00:53:33.937228 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-07 00:53:33.937232 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.271) 0:01:06.007 ***** 2026-01-07 00:53:33.937235 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937239 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937243 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937247 | orchestrator | 2026-01-07 00:53:33.937251 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-07 00:53:33.937254 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:00.293) 0:01:06.300 ***** 2026-01-07 00:53:33.937258 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937262 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937267 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 00:53:33.937271 | orchestrator | 2026-01-07 00:53:33.937287 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-07 00:53:33.937292 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.403) 0:01:06.704 ***** 2026-01-07 00:53:33.937296 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937300 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937305 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937309 | orchestrator | 2026-01-07 00:53:33.937314 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-07 00:53:33.937318 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.281) 0:01:06.985 ***** 2026-01-07 00:53:33.937323 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937327 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937331 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937336 | orchestrator | 2026-01-07 00:53:33.937341 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-07 00:53:33.937345 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:00.285) 0:01:07.271 ***** 2026-01-07 00:53:33.937349 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937354 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937361 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937365 | orchestrator | 2026-01-07 00:53:33.937370 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-07 00:53:33.937374 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.423) 0:01:07.694 ***** 2026-01-07 00:53:33.937379 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937383 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937388 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 00:53:33.937392 | orchestrator | 2026-01-07 00:53:33.937396 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-07 00:53:33.937400 | orchestrator | Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.789) 0:01:08.483 ***** 2026-01-07 00:53:33.937404 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937409 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937413 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937417 | orchestrator | 2026-01-07 00:53:33.937422 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-07 00:53:33.937426 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.312) 0:01:08.796 ***** 2026-01-07 00:53:33.937431 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937435 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937439 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937444 | orchestrator | 2026-01-07 00:53:33.937451 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-07 00:53:33.937455 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.345) 0:01:09.141 ***** 2026-01-07 00:53:33.937460 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937464 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937468 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937472 | orchestrator | 2026-01-07 00:53:33.937477 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-07 00:53:33.937481 | orchestrator | Wednesday 07 January 2026 00:52:21 +0000 (0:00:00.315) 0:01:09.457 ***** 2026-01-07 00:53:33.937486 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937491 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937495 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 00:53:33.937499 | orchestrator | 2026-01-07 00:53:33.937504 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-07 00:53:33.937508 | orchestrator | Wednesday 07 January 2026 00:52:22 +0000 (0:00:00.346) 0:01:09.803 ***** 2026-01-07 00:53:33.937512 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:53:33.937517 | orchestrator | 2026-01-07 00:53:33.937521 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-07 00:53:33.937528 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:00.833) 0:01:10.637 ***** 2026-01-07 00:53:33.937534 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937541 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937547 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937554 | orchestrator | 2026-01-07 00:53:33.937560 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-07 00:53:33.937567 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:00.478) 0:01:11.115 ***** 2026-01-07 00:53:33.937571 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:53:33.937576 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:53:33.937580 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:53:33.937585 | orchestrator | 2026-01-07 00:53:33.937589 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-07 00:53:33.937593 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:00.455) 0:01:11.571 ***** 2026-01-07 00:53:33.937598 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937602 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937607 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937611 | orchestrator | 2026-01-07 00:53:33.937616 | 
orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-07 00:53:33.937624 | orchestrator | Wednesday 07 January 2026 00:52:24 +0000 (0:00:00.589) 0:01:12.160 ***** 2026-01-07 00:53:33.937628 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937633 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937639 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937646 | orchestrator | 2026-01-07 00:53:33.937653 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-07 00:53:33.937657 | orchestrator | Wednesday 07 January 2026 00:52:25 +0000 (0:00:00.439) 0:01:12.600 ***** 2026-01-07 00:53:33.937661 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937665 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937669 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937672 | orchestrator | 2026-01-07 00:53:33.937676 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-07 00:53:33.937683 | orchestrator | Wednesday 07 January 2026 00:52:25 +0000 (0:00:00.461) 0:01:13.062 ***** 2026-01-07 00:53:33.937694 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937701 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937710 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937719 | orchestrator | 2026-01-07 00:53:33.937725 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-07 00:53:33.937742 | orchestrator | Wednesday 07 January 2026 00:52:25 +0000 (0:00:00.360) 0:01:13.422 ***** 2026-01-07 00:53:33.937753 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937760 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937766 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937772 | orchestrator | 2026-01-07 
00:53:33.937779 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-07 00:53:33.937785 | orchestrator | Wednesday 07 January 2026 00:52:26 +0000 (0:00:00.582) 0:01:14.004 ***** 2026-01-07 00:53:33.937792 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:53:33.937799 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:53:33.937805 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:53:33.937812 | orchestrator | 2026-01-07 00:53:33.937819 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-07 00:53:33.937825 | orchestrator | Wednesday 07 January 2026 00:52:26 +0000 (0:00:00.313) 0:01:14.318 ***** 2026-01-07 00:53:33.937832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937862 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937905 | orchestrator | 2026-01-07 00:53:33.937909 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-07 00:53:33.937913 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:01.434) 0:01:15.752 ***** 2026-01-07 00:53:33.937917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937961 | orchestrator | 2026-01-07 00:53:33.937965 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-07 00:53:33.937969 | orchestrator | Wednesday 07 January 2026 00:52:32 +0000 (0:00:04.286) 0:01:20.039 ***** 2026-01-07 00:53:33.937973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 00:53:33.937977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.937981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938072 | orchestrator |
2026-01-07 00:53:33.938076 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938081 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:02.854) 0:01:22.894 *****
2026-01-07 00:53:33.938084 | orchestrator |
2026-01-07 00:53:33.938088 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938092 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:00.139) 0:01:23.034 *****
2026-01-07 00:53:33.938096 | orchestrator |
2026-01-07 00:53:33.938100 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938104 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:00.132) 0:01:23.166 *****
2026-01-07 00:53:33.938107 | orchestrator |
2026-01-07 00:53:33.938111 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-07 00:53:33.938115 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:00.104) 0:01:23.271 *****
2026-01-07 00:53:33.938119 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938123 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938127 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.938131 | orchestrator |
2026-01-07 00:53:33.938135 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-07 00:53:33.938139 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:07.145) 0:01:30.417 *****
2026-01-07 00:53:33.938142 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.938147 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938151 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938158 | orchestrator |
2026-01-07 00:53:33.938162 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-07 00:53:33.938166 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:02.831) 0:01:33.248 *****
2026-01-07 00:53:33.938170 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938174 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.938178 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938182 | orchestrator |
2026-01-07 00:53:33.938188 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-07 00:53:33.938196 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:07.439) 0:01:40.688 *****
2026-01-07 00:53:33.938207 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:33.938214 | orchestrator |
2026-01-07 00:53:33.938221 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-07 00:53:33.938227 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:00.395) 0:01:41.083 *****
2026-01-07 00:53:33.938234 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938240 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938246 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938253 | orchestrator |
2026-01-07 00:53:33.938265 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-07 00:53:33.938272 | orchestrator | Wednesday 07 January 2026 00:52:54 +0000 (0:00:00.898) 0:01:41.981 *****
2026-01-07 00:53:33.938295 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:33.938302 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:33.938308 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.938314 | orchestrator |
2026-01-07 00:53:33.938320 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-07 00:53:33.938326 | orchestrator | Wednesday 07 January 2026 00:52:55 +0000 (0:00:00.825) 0:01:42.646 *****
2026-01-07 00:53:33.938332 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938338 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938344 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938351 | orchestrator |
2026-01-07 00:53:33.938357 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-07 00:53:33.938363 | orchestrator | Wednesday 07 January 2026 00:52:55 +0000 (0:00:00.705) 0:01:43.472 *****
2026-01-07 00:53:33.938369 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:33.938375 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:33.938382 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.938388 | orchestrator |
2026-01-07 00:53:33.938394 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-07 00:53:33.938404 | orchestrator | Wednesday 07 January 2026 00:52:56 +0000 (0:00:00.705) 0:01:44.177 *****
2026-01-07 00:53:33.938410 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938416 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938423 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938429 | orchestrator |
2026-01-07 00:53:33.938435 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-07 00:53:33.938441 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:01.142) 0:01:45.320 *****
2026-01-07 00:53:33.938448 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938454 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938460 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938466 | orchestrator |
2026-01-07 00:53:33.938472 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-01-07 00:53:33.938479 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.812) 0:01:46.132 *****
2026-01-07 00:53:33.938486 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938492 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938498 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938505 | orchestrator |
2026-01-07 00:53:33.938511 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-07 00:53:33.938518 | orchestrator | Wednesday 07 January 2026 00:52:58 +0000 (0:00:00.342) 0:01:46.475 *****
2026-01-07 00:53:33.938530 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938537 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938544 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938557 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938576 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938583 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938592 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938598 | orchestrator |
2026-01-07 00:53:33.938604 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-07 00:53:33.938611 | orchestrator | Wednesday 07 January 2026 00:53:00 +0000 (0:00:01.634) 0:01:48.109 *****
2026-01-07 00:53:33.938621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938627 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938634 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938694 | orchestrator |
2026-01-07 00:53:33.938701 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-07 00:53:33.938711 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:04.663) 0:01:52.773 *****
2026-01-07 00:53:33.938717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938775 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-07 00:53:33.938781 | orchestrator |
2026-01-07 00:53:33.938790 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938797 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:03.301) 0:01:56.074 *****
2026-01-07 00:53:33.938803 | orchestrator |
2026-01-07 00:53:33.938810 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938816 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.065) 0:01:56.139 *****
2026-01-07 00:53:33.938822 | orchestrator |
2026-01-07 00:53:33.938829 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-07 00:53:33.938835 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.064) 0:01:56.204 *****
2026-01-07 00:53:33.938841 | orchestrator |
2026-01-07 00:53:33.938847 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-07 00:53:33.938854 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.062) 0:01:56.267 *****
2026-01-07 00:53:33.938860 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938866 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938873 | orchestrator |
2026-01-07 00:53:33.938879 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-07 00:53:33.938886 | orchestrator | Wednesday 07 January 2026 00:53:14 +0000 (0:00:06.233) 0:02:02.500 *****
2026-01-07 00:53:33.938892 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938899 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938905 | orchestrator |
2026-01-07 00:53:33.938911 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-07 00:53:33.938917 | orchestrator | Wednesday 07 January 2026 00:53:21 +0000 (0:00:06.208) 0:02:08.709 *****
2026-01-07 00:53:33.938923 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:53:33.938929 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:53:33.938935 | orchestrator |
2026-01-07 00:53:33.938940 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-07 00:53:33.938946 | orchestrator | Wednesday 07 January 2026 00:53:27 +0000 (0:00:06.535) 0:02:15.244 *****
2026-01-07 00:53:33.938952 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:53:33.938958 | orchestrator |
2026-01-07 00:53:33.938965 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-07 00:53:33.938971 | orchestrator | Wednesday 07 January 2026 00:53:27 +0000 (0:00:00.130) 0:02:15.375 *****
2026-01-07 00:53:33.938977 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.938983 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.938989 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.938996 | orchestrator |
2026-01-07 00:53:33.939002 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-07 00:53:33.939029 | orchestrator | Wednesday 07 January 2026 00:53:28 +0000 (0:00:00.753) 0:02:16.129 *****
2026-01-07 00:53:33.939036 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:33.939042 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:33.939049 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.939055 | orchestrator |
2026-01-07 00:53:33.939061 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-07 00:53:33.939067 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:00.610) 0:02:16.740 *****
2026-01-07 00:53:33.939074 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.939080 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.939086 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.939092 | orchestrator |
2026-01-07 00:53:33.939099 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-07 00:53:33.939105 | orchestrator | Wednesday 07 January 2026 00:53:29 +0000 (0:00:00.806) 0:02:17.547 *****
2026-01-07 00:53:33.939111 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:53:33.939117 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:53:33.939123 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:53:33.939130 | orchestrator |
2026-01-07 00:53:33.939136 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-07 00:53:33.939148 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:00.613) 0:02:18.160 *****
2026-01-07 00:53:33.939154 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.939160 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.939167 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.939173 | orchestrator |
2026-01-07 00:53:33.939179 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-07 00:53:33.939185 | orchestrator | Wednesday 07 January 2026 00:53:31 +0000 (0:00:00.834) 0:02:18.994 *****
2026-01-07 00:53:33.939192 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:53:33.939198 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:53:33.939204 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:53:33.939210 | orchestrator |
2026-01-07 00:53:33.939217 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 00:53:33.939223 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-07 00:53:33.939230 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-07 00:53:33.939242 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-07 00:53:33.939248 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:53:33.939255 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:53:33.939261 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 00:53:33.939267 | orchestrator |
2026-01-07 00:53:33.939273 | orchestrator |
2026-01-07 00:53:33.939309 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 00:53:33.939317 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:01.008) 0:02:20.003 *****
2026-01-07 00:53:33.939323 | orchestrator | ===============================================================================
2026-01-07 00:53:33.939333 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.61s
2026-01-07 00:53:33.939340 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.08s
2026-01-07 00:53:33.939346 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.97s
2026-01-07 00:53:33.939352 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.38s
2026-01-07 00:53:33.939358 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.04s
2026-01-07 00:53:33.939364 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.66s
2026-01-07 00:53:33.939371 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.29s
2026-01-07 00:53:33.939377 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.30s
2026-01-07 00:53:33.939383 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.85s
2026-01-07 00:53:33.939388 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.30s
2026-01-07 00:53:33.939394 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.63s
2026-01-07 00:53:33.939401 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.50s
2026-01-07 00:53:33.939407 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.46s
2026-01-07 00:53:33.939413 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-01-07 00:53:33.939419 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s
2026-01-07 00:53:33.939425 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.32s
2026-01-07 00:53:33.939436 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.27s
2026-01-07 00:53:33.939443 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.23s
2026-01-07 00:53:33.939449 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.14s
2026-01-07 00:53:33.939455 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.13s
2026-01-07 00:53:36.995779 | orchestrator | 2026-01-07 00:53:36 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:36.996204 | orchestrator | 2026-01-07 00:53:36 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:36.998359 | orchestrator | 2026-01-07 00:53:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:40.043043 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:40.043098 | orchestrator | 2026-01-07 00:53:40 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:40.043105 | orchestrator | 2026-01-07 00:53:40 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:43.090906 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:43.093782 | orchestrator | 2026-01-07 00:53:43 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:43.093840 | orchestrator | 2026-01-07 00:53:43 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:46.139223 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:46.141749 | orchestrator | 2026-01-07 00:53:46 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:46.141823 | orchestrator | 2026-01-07 00:53:46 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:49.186700 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:49.188321 | orchestrator | 2026-01-07 00:53:49 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:49.188548 | orchestrator | 2026-01-07 00:53:49 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:52.229698 | orchestrator | 2026-01-07 00:53:52 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:52.231895 | orchestrator | 2026-01-07 00:53:52 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:52.232023 | orchestrator | 2026-01-07 00:53:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:55.279577 | orchestrator | 2026-01-07 00:53:55 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:55.280794 | orchestrator | 2026-01-07 00:53:55 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:55.280869 | orchestrator | 2026-01-07 00:53:55 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:53:58.327425 | orchestrator | 2026-01-07 00:53:58 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:53:58.329877 | orchestrator | 2026-01-07 00:53:58 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:53:58.330156 | orchestrator | 2026-01-07 00:53:58 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:01.373224 | orchestrator | 2026-01-07 00:54:01 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:01.375674 | orchestrator | 2026-01-07 00:54:01 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:01.375771 | orchestrator | 2026-01-07 00:54:01 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:04.427306 | orchestrator | 2026-01-07 00:54:04 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:04.430843 | orchestrator | 2026-01-07 00:54:04 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:04.430922 | orchestrator | 2026-01-07 00:54:04 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:07.469676 | orchestrator | 2026-01-07 00:54:07 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:07.471869 | orchestrator | 2026-01-07 00:54:07 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:07.471969 | orchestrator | 2026-01-07 00:54:07 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:10.511822 | orchestrator | 2026-01-07 00:54:10 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:10.513526 | orchestrator | 2026-01-07 00:54:10 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:10.513658 | orchestrator | 2026-01-07 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:13.555684 | orchestrator | 2026-01-07 00:54:13 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:13.556454 | orchestrator | 2026-01-07 00:54:13 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:13.556493 | orchestrator | 2026-01-07 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:16.593960 | orchestrator | 2026-01-07 00:54:16 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:16.596397 | orchestrator | 2026-01-07 00:54:16 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:16.596474 | orchestrator | 2026-01-07 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:19.644071 | orchestrator | 2026-01-07 00:54:19 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:19.644155 | orchestrator | 2026-01-07 00:54:19 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:19.644186 | orchestrator | 2026-01-07 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:22.697273 | orchestrator | 2026-01-07 00:54:22 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:22.699496 | orchestrator | 2026-01-07 00:54:22 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:22.699550 | orchestrator | 2026-01-07 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:25.728931 | orchestrator | 2026-01-07 00:54:25 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:25.729340 | orchestrator | 2026-01-07 00:54:25 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:25.729370 | orchestrator | 2026-01-07 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:28.777008 | orchestrator | 2026-01-07 00:54:28 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:28.778606 | orchestrator | 2026-01-07 00:54:28 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:28.778655 | orchestrator | 2026-01-07 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:31.822954 | orchestrator | 2026-01-07 00:54:31 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:31.823616 | orchestrator | 2026-01-07 00:54:31 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:31.823655 | orchestrator | 2026-01-07 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:34.871394 | orchestrator | 2026-01-07 00:54:34 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:34.874241 | orchestrator | 2026-01-07 00:54:34 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:34.874454 | orchestrator | 2026-01-07 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:37.922761 | orchestrator | 2026-01-07 00:54:37 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:37.922818 | orchestrator | 2026-01-07 00:54:37 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED
2026-01-07 00:54:37.922827 | orchestrator | 2026-01-07 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:54:40.965755 | orchestrator | 2026-01-07 00:54:40 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED
2026-01-07 00:54:40.967055 | orchestrator | 2026-01-07 00:54:40 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07
00:54:40.967193 | orchestrator | 2026-01-07 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:44.020315 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:44.027329 | orchestrator | 2026-01-07 00:54:44 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:44.027382 | orchestrator | 2026-01-07 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:47.082303 | orchestrator | 2026-01-07 00:54:47 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:47.083682 | orchestrator | 2026-01-07 00:54:47 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:47.083736 | orchestrator | 2026-01-07 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:50.130736 | orchestrator | 2026-01-07 00:54:50 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:50.130786 | orchestrator | 2026-01-07 00:54:50 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:50.130792 | orchestrator | 2026-01-07 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:53.176025 | orchestrator | 2026-01-07 00:54:53 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:53.177307 | orchestrator | 2026-01-07 00:54:53 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:53.177358 | orchestrator | 2026-01-07 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:54:56.217807 | orchestrator | 2026-01-07 00:54:56 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:56.217910 | orchestrator | 2026-01-07 00:54:56 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:56.217922 | orchestrator | 2026-01-07 00:54:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 00:54:59.258561 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:54:59.261440 | orchestrator | 2026-01-07 00:54:59 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:54:59.261616 | orchestrator | 2026-01-07 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:02.307733 | orchestrator | 2026-01-07 00:55:02 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:02.308396 | orchestrator | 2026-01-07 00:55:02 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:02.308448 | orchestrator | 2026-01-07 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:05.349242 | orchestrator | 2026-01-07 00:55:05 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:05.351848 | orchestrator | 2026-01-07 00:55:05 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:05.351904 | orchestrator | 2026-01-07 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:08.400130 | orchestrator | 2026-01-07 00:55:08 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:08.401269 | orchestrator | 2026-01-07 00:55:08 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:08.401491 | orchestrator | 2026-01-07 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:11.449463 | orchestrator | 2026-01-07 00:55:11 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:11.449528 | orchestrator | 2026-01-07 00:55:11 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:11.449539 | orchestrator | 2026-01-07 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:14.489795 | orchestrator | 2026-01-07 
00:55:14 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:14.492359 | orchestrator | 2026-01-07 00:55:14 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:14.492418 | orchestrator | 2026-01-07 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:17.538899 | orchestrator | 2026-01-07 00:55:17 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:17.539154 | orchestrator | 2026-01-07 00:55:17 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:17.539227 | orchestrator | 2026-01-07 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:20.584731 | orchestrator | 2026-01-07 00:55:20 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:20.586436 | orchestrator | 2026-01-07 00:55:20 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:20.586513 | orchestrator | 2026-01-07 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:23.630866 | orchestrator | 2026-01-07 00:55:23 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:23.631631 | orchestrator | 2026-01-07 00:55:23 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:23.631661 | orchestrator | 2026-01-07 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:26.677796 | orchestrator | 2026-01-07 00:55:26 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:26.678639 | orchestrator | 2026-01-07 00:55:26 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:26.678965 | orchestrator | 2026-01-07 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:29.721881 | orchestrator | 2026-01-07 00:55:29 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state 
STARTED 2026-01-07 00:55:29.722206 | orchestrator | 2026-01-07 00:55:29 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:29.722249 | orchestrator | 2026-01-07 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:32.758831 | orchestrator | 2026-01-07 00:55:32 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:32.760123 | orchestrator | 2026-01-07 00:55:32 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:32.760187 | orchestrator | 2026-01-07 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:35.809754 | orchestrator | 2026-01-07 00:55:35 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:35.811624 | orchestrator | 2026-01-07 00:55:35 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:35.811729 | orchestrator | 2026-01-07 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:38.855172 | orchestrator | 2026-01-07 00:55:38 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:38.857191 | orchestrator | 2026-01-07 00:55:38 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:38.857367 | orchestrator | 2026-01-07 00:55:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:41.910468 | orchestrator | 2026-01-07 00:55:41 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:41.915732 | orchestrator | 2026-01-07 00:55:41 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:41.916622 | orchestrator | 2026-01-07 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:44.952622 | orchestrator | 2026-01-07 00:55:44 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:44.954272 | orchestrator | 2026-01-07 00:55:44 | INFO  
| Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:44.954552 | orchestrator | 2026-01-07 00:55:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:48.008114 | orchestrator | 2026-01-07 00:55:48 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:48.008808 | orchestrator | 2026-01-07 00:55:48 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:48.008850 | orchestrator | 2026-01-07 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:51.050475 | orchestrator | 2026-01-07 00:55:51 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:51.052852 | orchestrator | 2026-01-07 00:55:51 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:51.053007 | orchestrator | 2026-01-07 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:54.104292 | orchestrator | 2026-01-07 00:55:54 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:54.106293 | orchestrator | 2026-01-07 00:55:54 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:54.106349 | orchestrator | 2026-01-07 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:55:57.147386 | orchestrator | 2026-01-07 00:55:57 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:55:57.149857 | orchestrator | 2026-01-07 00:55:57 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:55:57.150064 | orchestrator | 2026-01-07 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:00.198988 | orchestrator | 2026-01-07 00:56:00 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:00.202880 | orchestrator | 2026-01-07 00:56:00 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 
00:56:00.202986 | orchestrator | 2026-01-07 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:03.244572 | orchestrator | 2026-01-07 00:56:03 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:03.245093 | orchestrator | 2026-01-07 00:56:03 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:03.245255 | orchestrator | 2026-01-07 00:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:06.282858 | orchestrator | 2026-01-07 00:56:06 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:06.285214 | orchestrator | 2026-01-07 00:56:06 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:06.285296 | orchestrator | 2026-01-07 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:09.331473 | orchestrator | 2026-01-07 00:56:09 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:09.332939 | orchestrator | 2026-01-07 00:56:09 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:09.333320 | orchestrator | 2026-01-07 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:12.369673 | orchestrator | 2026-01-07 00:56:12 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:12.370923 | orchestrator | 2026-01-07 00:56:12 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:12.370973 | orchestrator | 2026-01-07 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:15.413000 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:15.413060 | orchestrator | 2026-01-07 00:56:15 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:15.413069 | orchestrator | 2026-01-07 00:56:15 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 00:56:18.445076 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:18.446679 | orchestrator | 2026-01-07 00:56:18 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:18.446753 | orchestrator | 2026-01-07 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:21.499076 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:21.501766 | orchestrator | 2026-01-07 00:56:21 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:21.501864 | orchestrator | 2026-01-07 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:24.546857 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:24.548051 | orchestrator | 2026-01-07 00:56:24 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:24.548097 | orchestrator | 2026-01-07 00:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:27.585574 | orchestrator | 2026-01-07 00:56:27 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:27.586349 | orchestrator | 2026-01-07 00:56:27 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state STARTED 2026-01-07 00:56:27.586407 | orchestrator | 2026-01-07 00:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:30.627701 | orchestrator | 2026-01-07 00:56:30 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:30.635347 | orchestrator | 2026-01-07 00:56:30 | INFO  | Task 6503c060-925c-41c9-8b19-cf7b0a46d012 is in state SUCCESS 2026-01-07 00:56:30.635432 | orchestrator | 2026-01-07 00:56:30.639036 | orchestrator | 2026-01-07 00:56:30.639114 | orchestrator | PLAY [Group hosts based on 
configuration] **************************************
2026-01-07 00:56:30.639122 | orchestrator |
2026-01-07 00:56:30.639127 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:56:30.639131 | orchestrator | Wednesday 07 January 2026 00:50:07 +0000 (0:00:00.206) 0:00:00.206 *****
2026-01-07 00:56:30.639138 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:56:30.639146 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:56:30.639152 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:56:30.639158 | orchestrator |
2026-01-07 00:56:30.639164 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:56:30.639170 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.251) 0:00:00.457 *****
2026-01-07 00:56:30.639177 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-07 00:56:30.639183 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-07 00:56:30.639189 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-07 00:56:30.639194 | orchestrator |
2026-01-07 00:56:30.639200 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-07 00:56:30.639206 | orchestrator |
2026-01-07 00:56:30.639212 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-07 00:56:30.639218 | orchestrator | Wednesday 07 January 2026 00:50:08 +0000 (0:00:00.406) 0:00:00.864 *****
2026-01-07 00:56:30.639225 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.639231 | orchestrator |
2026-01-07 00:56:30.639237 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-07 00:56:30.639244 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.595) 0:00:01.459 *****
2026-01-07 00:56:30.639280 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:56:30.639303 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:56:30.639309 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:56:30.639315 | orchestrator |
2026-01-07 00:56:30.639321 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-07 00:56:30.639327 | orchestrator | Wednesday 07 January 2026 00:50:09 +0000 (0:00:00.744) 0:00:02.204 *****
2026-01-07 00:56:30.639339 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.639345 | orchestrator |
2026-01-07 00:56:30.639351 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-07 00:56:30.639358 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:01.041) 0:00:03.246 *****
2026-01-07 00:56:30.639365 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:56:30.639371 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:56:30.639378 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:56:30.639384 | orchestrator |
2026-01-07 00:56:30.639391 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-07 00:56:30.639402 | orchestrator | Wednesday 07 January 2026 00:50:11 +0000 (0:00:00.695) 0:00:03.942 *****
2026-01-07 00:56:30.639412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639429 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:56:30.639434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:56:30.639464 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639468 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-07 00:56:30.639472 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:56:30.639476 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-07 00:56:30.639479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:56:30.639487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-07 00:56:30.639490 | orchestrator |
2026-01-07 00:56:30.639494 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-07 00:56:30.639498 | orchestrator | Wednesday 07 January 2026 00:50:16 +0000 (0:00:05.388) 0:00:09.331 *****
2026-01-07 00:56:30.639502 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:56:30.639506 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:56:30.639510 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:56:30.639514 | orchestrator |
2026-01-07 00:56:30.639518 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-07 00:56:30.639521 | orchestrator | Wednesday 07 January 2026 00:50:17 +0000 (0:00:00.936) 0:00:10.268 *****
2026-01-07 00:56:30.639539 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:56:30.639545 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:56:30.639549 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:56:30.639562 | orchestrator |
2026-01-07 00:56:30.639567 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-07 00:56:30.639571 | orchestrator | Wednesday 07 January 2026 00:50:19 +0000 (0:00:01.525) 0:00:11.793 *****
2026-01-07 00:56:30.639576 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-07 00:56:30.639583 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.639606 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-07 00:56:30.639615 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.639622 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-07 00:56:30.639629 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.639635 | orchestrator |
2026-01-07 00:56:30.639641 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-07 00:56:30.639648 | orchestrator | Wednesday 07 January 2026 00:50:20 +0000 (0:00:00.638) 0:00:12.431 *****
2026-01-07 00:56:30.639656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-07 00:56:30.639669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:56:30.639683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:56:30.639690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:56:30.639698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:56:30.639715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:56:30.639724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:56:30.639731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:56:30.639738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:56:30.639749 | orchestrator |
2026-01-07 00:56:30.639756 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-07 00:56:30.639762 | orchestrator | Wednesday 07 January 2026 00:50:22 +0000 (0:00:02.471) 0:00:14.903 *****
2026-01-07 00:56:30.639769 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.639775 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.639781 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.639788 | orchestrator |
2026-01-07 00:56:30.639794 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-07 00:56:30.639816 | orchestrator | Wednesday 07 January 2026 00:50:23 +0000 (0:00:01.099) 0:00:16.003 *****
2026-01-07 00:56:30.639823 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-07 00:56:30.639830 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-07 00:56:30.639836 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-07 00:56:30.639842 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-07 00:56:30.639848 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-07 00:56:30.639871 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-07 00:56:30.639877 | orchestrator |
2026-01-07 00:56:30.639884 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-07 00:56:30.639891 | orchestrator | Wednesday 07 January 2026 00:50:26 +0000 (0:00:02.560) 0:00:18.563 *****
2026-01-07 00:56:30.639898 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.639903 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.639909 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.639915 | orchestrator |
2026-01-07 00:56:30.639921 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-07 00:56:30.639926 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:01.589) 0:00:20.153 *****
2026-01-07 00:56:30.639932 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:56:30.639938 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:56:30.639944 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:56:30.639950 | orchestrator |
2026-01-07 00:56:30.639956 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-07 00:56:30.639961 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:02.066) 0:00:22.219 *****
2026-01-07 00:56:30.639972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-07 00:56:30.639984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:56:30.639991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-07 00:56:30.640003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-07 00:56:30.640009 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.640015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-07 00:56:30.640021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-07 00:56:30.640027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:56:30.640053 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.640059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:56:30.640090 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.640099 | orchestrator | 2026-01-07 00:56:30.640105 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-07 00:56:30.640111 | orchestrator | Wednesday 07 
January 2026 00:50:30 +0000 (0:00:00.710) 0:00:22.930 ***** 2026-01-07 00:56:30.640117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:56:30.640176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:56:30.640203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408', '__omit_place_holder__be5e0471ece8be30f32dd77199d62c1c617b1408'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-07 00:56:30.640219 | orchestrator | 2026-01-07 00:56:30.640223 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-07 00:56:30.640227 | orchestrator | Wednesday 07 January 2026 00:50:33 +0000 (0:00:02.731) 0:00:25.661 ***** 2026-01-07 00:56:30.640231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640307 | orchestrator | 2026-01-07 00:56:30.640311 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-07 00:56:30.640315 | orchestrator | Wednesday 07 January 2026 00:50:36 +0000 (0:00:03.077) 0:00:28.738 ***** 2026-01-07 00:56:30.640319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:56:30.640327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:56:30.640331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-07 00:56:30.640335 | orchestrator | 2026-01-07 00:56:30.640339 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-07 00:56:30.640343 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:03.880) 0:00:32.619 ***** 2026-01-07 00:56:30.640346 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:56:30.640350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:56:30.640354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-07 00:56:30.640358 | orchestrator | 2026-01-07 00:56:30.640362 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-07 00:56:30.640365 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:04.049) 0:00:36.668 ***** 2026-01-07 00:56:30.640372 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.640375 
| orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.640379 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.640383 | orchestrator | 2026-01-07 00:56:30.640387 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-07 00:56:30.640391 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:00.936) 0:00:37.605 ***** 2026-01-07 00:56:30.640395 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:56:30.640401 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:56:30.640404 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-07 00:56:30.640408 | orchestrator | 2026-01-07 00:56:30.640412 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-07 00:56:30.640416 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:03.293) 0:00:40.898 ***** 2026-01-07 00:56:30.640420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:56:30.640423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:56:30.640427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-07 00:56:30.640431 | orchestrator | 2026-01-07 00:56:30.640435 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-07 00:56:30.640438 | orchestrator | Wednesday 07 January 2026 00:50:50 +0000 (0:00:02.379) 0:00:43.277 ***** 2026-01-07 00:56:30.640442 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2026-01-07 00:56:30.640446 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-07 00:56:30.640450 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-07 00:56:30.640457 | orchestrator | 2026-01-07 00:56:30.640461 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-07 00:56:30.640465 | orchestrator | Wednesday 07 January 2026 00:50:52 +0000 (0:00:01.739) 0:00:45.017 ***** 2026-01-07 00:56:30.640469 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-07 00:56:30.640473 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-07 00:56:30.640483 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-07 00:56:30.640487 | orchestrator | 2026-01-07 00:56:30.640491 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-07 00:56:30.640499 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:01.527) 0:00:46.544 ***** 2026-01-07 00:56:30.640502 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.640506 | orchestrator | 2026-01-07 00:56:30.640510 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-07 00:56:30.640514 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:00.824) 0:00:47.369 ***** 2026-01-07 00:56:30.640521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.640552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.640571 | orchestrator | 2026-01-07 00:56:30.640575 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-07 00:56:30.640578 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:03.476) 0:00:50.845 ***** 2026-01-07 00:56:30.640582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640597 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.640601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640618 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.640639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640655 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.640659 | orchestrator | 2026-01-07 00:56:30.640662 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-07 00:56:30.640666 | orchestrator | Wednesday 07 January 2026 00:51:00 +0000 (0:00:01.659) 
0:00:52.504 ***** 2026-01-07 00:56:30.640670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640690 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:56:30.640694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640709 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 00:56:30.640718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640736 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 00:56:30.640743 | orchestrator | 2026-01-07 00:56:30.640750 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-07 00:56:30.640753 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:01.620) 0:00:54.125 ***** 2026-01-07 00:56:30.640760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640775 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.640779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640791 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.640800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.640838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640842 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.640846 | orchestrator | 2026-01-07 00:56:30.640850 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-07 00:56:30.640854 | orchestrator | Wednesday 07 January 2026 00:51:02 +0000 (0:00:00.701) 0:00:54.826 ***** 2026-01-07 00:56:30.640858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-01-07 00:56:30.640866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640869 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.640876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-01-07 00:56:30.640892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640896 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.640900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.640904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-01-07 00:56:30.640908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.640912 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.640916 | orchestrator | 2026-01-07 00:56:30.640920 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-07 00:56:30.640924 | orchestrator | Wednesday 07 January 2026 00:51:03 +0000 (0:00:00.679) 0:00:55.506 ***** 2026-01-07 00:56:30.640931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642109 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.642126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642145 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.642149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642177 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.642180 | orchestrator | 2026-01-07 00:56:30.642184 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-07 00:56:30.642189 | orchestrator | Wednesday 07 January 2026 00:51:03 +0000 (0:00:00.744) 0:00:56.251 ***** 2026-01-07 00:56:30.642192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642208 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.642212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642238 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.642242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642256 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.642260 | orchestrator | 2026-01-07 00:56:30.642264 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-07 00:56:30.642268 | orchestrator | Wednesday 07 January 2026 00:51:04 +0000 (0:00:00.671) 0:00:56.922 ***** 2026-01-07 00:56:30.642272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642299 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.642302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642314 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.642329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642349 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.642353 | orchestrator | 2026-01-07 00:56:30.642357 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-07 00:56:30.642367 | orchestrator | Wednesday 07 January 2026 00:51:05 +0000 (0:00:00.507) 0:00:57.430 ***** 2026-01-07 00:56:30.642371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642383 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.642387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642405 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.642415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-07 00:56:30.642419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-07 00:56:30.642423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-07 00:56:30.642427 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.642431 | orchestrator | 2026-01-07 00:56:30.642435 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2026-01-07 00:56:30.642439 | orchestrator | Wednesday 07 January 2026 00:51:05 +0000 (0:00:00.702) 0:00:58.132 ***** 2026-01-07 00:56:30.642442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:56:30.642446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:56:30.642450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-07 00:56:30.642454 | orchestrator | 2026-01-07 00:56:30.642458 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-07 00:56:30.642468 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:01.519) 0:00:59.651 ***** 2026-01-07 00:56:30.642472 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:56:30.642476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:56:30.642480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-07 00:56:30.642484 | orchestrator | 2026-01-07 00:56:30.642488 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-07 00:56:30.642492 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:01.434) 0:01:01.086 ***** 2026-01-07 00:56:30.642495 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:56:30.642499 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:56:30.642503 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 
'id_rsa.pub'})  2026-01-07 00:56:30.642507 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.642511 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-07 00:56:30.642514 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:56:30.642518 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.642522 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-07 00:56:30.642526 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.642529 | orchestrator | 2026-01-07 00:56:30.642533 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-07 00:56:30.642537 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:01.246) 0:01:02.332 ***** 2026-01-07 00:56:30.642548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-07 00:56:30.642578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.642586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.642590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-07 00:56:30.642594 | orchestrator | 2026-01-07 00:56:30.642598 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-07 00:56:30.642601 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:02.838) 0:01:05.171 ***** 2026-01-07 00:56:30.642605 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.642615 | orchestrator | 2026-01-07 00:56:30.642621 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-07 00:56:30.642629 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.601) 0:01:05.773 ***** 2026-01-07 00:56:30.642639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:56:30.642647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.642654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:56:30.642682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.642693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-07 00:56:30.642714 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.642724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.642742 | orchestrator | 2026-01-07 00:56:30.642748 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-01-07 00:56:30.642762 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:04.248) 0:01:10.021 *****
2026-01-07 00:56:30.642769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-07 00:56:30.642776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-07 00:56:30.642783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:56:30.642798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:56:30.642818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642842 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.642846 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.642850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-07 00:56:30.642857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-07 00:56:30.642867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.642879 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.642882 | orchestrator |
2026-01-07 00:56:30.642886 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-07 00:56:30.642890 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:01.077) 0:01:11.098 *****
2026-01-07 00:56:30.642894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642904 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.642908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642920 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.642924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-07 00:56:30.642928 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.642932 | orchestrator |
2026-01-07 00:56:30.642935 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-07 00:56:30.642939 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.767) 0:01:11.866 *****
2026-01-07 00:56:30.642943 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.642947 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.642950 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.642954 | orchestrator |
2026-01-07 00:56:30.642958 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-07 00:56:30.642962 | orchestrator | Wednesday 07 January 2026 00:51:20 +0000 (0:00:01.209) 0:01:13.075 *****
2026-01-07 00:56:30.642965 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.642969 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.642973 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.642976 | orchestrator |
2026-01-07 00:56:30.642980 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-07 00:56:30.642984 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:01.841) 0:01:14.916 *****
2026-01-07 00:56:30.642988 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.642991 | orchestrator |
2026-01-07 00:56:30.642995 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-07 00:56:30.643003 | orchestrator | Wednesday 07 January 2026 00:51:23 +0000 (0:00:00.682) 0:01:15.599 *****
2026-01-07 00:56:30.643015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643066 | orchestrator |
2026-01-07 00:56:30.643070 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-07 00:56:30.643074 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:04.464) 0:01:20.064 *****
2026-01-07 00:56:30.643078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643102 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.643120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643133 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.643137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-07 00:56:30.643150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.643158 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.643162 | orchestrator |
2026-01-07 00:56:30.643166 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-07 00:56:30.643170 | orchestrator | Wednesday 07 January 2026 00:51:28 +0000 (0:00:00.602) 0:01:20.666 *****
2026-01-07 00:56:30.643174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643182 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.643186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643194 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.643198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-07 00:56:30.643205 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.643209 | orchestrator |
2026-01-07 00:56:30.643216 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-07 00:56:30.643220 | orchestrator | Wednesday 07 January 2026 00:51:29 +0000 (0:00:01.055) 0:01:21.722 *****
2026-01-07 00:56:30.643224 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.643228 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.643231 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.643235 | orchestrator |
2026-01-07 00:56:30.643239 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-07 00:56:30.643243 | orchestrator | Wednesday 07 January 2026 00:51:30 +0000 (0:00:01.297) 0:01:23.019 *****
2026-01-07 00:56:30.643247 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.643250 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.643257 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.643261 | orchestrator |
2026-01-07 00:56:30.643265 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-07 00:56:30.643269 | orchestrator | Wednesday 07 January 2026 00:51:32 +0000 (0:00:02.099) 0:01:25.119 *****
2026-01-07 00:56:30.643273 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.643279 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.643283 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.643289 | orchestrator |
2026-01-07 00:56:30.643293 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-07 00:56:30.643297 | orchestrator | Wednesday 07 January 2026 00:51:33 +0000 (0:00:00.308) 0:01:25.427 *****
2026-01-07 00:56:30.643300 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.643307 | orchestrator |
2026-01-07 00:56:30.643311 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-07 00:56:30.643315 | orchestrator | Wednesday 07 January 2026 00:51:33 +0000 (0:00:00.955) 0:01:26.382 *****
2026-01-07 00:56:30.644826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644907 | orchestrator |
2026-01-07 00:56:30.644914 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-07 00:56:30.644921 | orchestrator | Wednesday 07 January 2026 00:51:36 +0000 (0:00:02.585) 0:01:28.967 *****
2026-01-07 00:56:30.644928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644933 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.644944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644950 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.644967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-07 00:56:30.644973 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.644979 | orchestrator |
2026-01-07 00:56:30.644985 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-01-07 00:56:30.644991 | orchestrator | Wednesday 07 January 2026 00:51:38 +0000 (0:00:01.912) 0:01:30.879 *****
2026-01-07 00:56:30.645005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-07 00:56:30.645018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-07 00:56:30.645025 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.645031 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:56:30.645037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:56:30.645043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:56:30.645049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-07 00:56:30.645055 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.645060 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.645066 | orchestrator | 2026-01-07 00:56:30.645072 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-01-07 00:56:30.645078 | orchestrator | Wednesday 07 January 2026 00:51:40 +0000 (0:00:01.840) 0:01:32.720 ***** 2026-01-07 00:56:30.645083 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.645093 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.645099 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.645105 | orchestrator | 2026-01-07 00:56:30.645111 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-07 00:56:30.645118 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:00.787) 0:01:33.508 ***** 2026-01-07 00:56:30.645124 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.645131 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.645138 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.645143 | orchestrator | 2026-01-07 00:56:30.645150 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-07 00:56:30.645163 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:01.294) 0:01:34.802 ***** 2026-01-07 00:56:30.645169 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.645175 | orchestrator | 2026-01-07 00:56:30.645181 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-07 00:56:30.645187 | orchestrator | Wednesday 07 January 2026 00:51:43 +0000 (0:00:00.802) 0:01:35.604 ***** 2026-01-07 00:56:30.645194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.645206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645220 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.645241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.645268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645293 | orchestrator | 2026-01-07 00:56:30.645297 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-07 00:56:30.645301 | orchestrator | Wednesday 07 January 2026 00:51:47 +0000 (0:00:04.234) 0:01:39.839 ***** 2026-01-07 00:56:30.645304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.645308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645342 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.645379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.645384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645388 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645396 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.645406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.645445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.645458 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.645462 | orchestrator | 2026-01-07 00:56:30.645466 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-07 00:56:30.645469 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:01.020) 0:01:40.860 ***** 2026-01-07 00:56:30.645474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645484 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.645488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645496 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.645499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-07 00:56:30.645511 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.645515 | orchestrator | 2026-01-07 00:56:30.645518 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-07 00:56:30.645526 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.890) 0:01:41.750 ***** 2026-01-07 00:56:30.645530 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.645534 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.645538 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.645541 | orchestrator | 2026-01-07 00:56:30.645545 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-07 00:56:30.645549 | orchestrator | Wednesday 07 January 2026 00:51:50 +0000 (0:00:01.480) 0:01:43.231 ***** 2026-01-07 00:56:30.645553 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.645556 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.645560 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.645564 | orchestrator | 2026-01-07 00:56:30.645571 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-07 00:56:30.645575 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:02.226) 0:01:45.458 ***** 2026-01-07 
00:56:30.645578 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.645582 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.645586 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.645589 | orchestrator |
2026-01-07 00:56:30.645593 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-07 00:56:30.645597 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:00.578) 0:01:46.036 *****
2026-01-07 00:56:30.645601 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.645604 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.645608 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.645612 | orchestrator |
2026-01-07 00:56:30.645616 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-07 00:56:30.645619 | orchestrator | Wednesday 07 January 2026 00:51:53 +0000 (0:00:00.328) 0:01:46.365 *****
2026-01-07 00:56:30.645623 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.645627 | orchestrator |
2026-01-07 00:56:30.645631 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-07 00:56:30.645634 | orchestrator | Wednesday 07 January 2026 00:51:54 +0000 (0:00:00.837) 0:01:47.203 *****
2026-01-07 00:56:30.645639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645756 | orchestrator |
2026-01-07 00:56:30.645760 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-07 00:56:30.645764 | orchestrator | Wednesday 07 January 2026 00:51:59 +0000 (0:00:05.092) 0:01:52.295 *****
2026-01-07 00:56:30.645767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645846 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.645853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645891 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.645898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 00:56:30.645902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 00:56:30.645909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-07 00:56:30.645935 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.645939 | orchestrator |
2026-01-07 00:56:30.645943 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-07 00:56:30.645946 | orchestrator | Wednesday 07 January 2026 00:52:00 +0000 (0:00:01.062) 0:01:53.358 *****
2026-01-07 00:56:30.645951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645973 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.645977 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.645981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-07 00:56:30.645989 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.645992 | orchestrator |
2026-01-07 00:56:30.645996 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-07 00:56:30.646000 | orchestrator | Wednesday 07 January 2026 00:52:01 +0000 (0:00:00.990) 0:01:54.348 *****
2026-01-07 00:56:30.646003 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646007 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646011 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646054 | orchestrator |
2026-01-07 00:56:30.646058 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-07 00:56:30.646061 | orchestrator | Wednesday 07 January 2026 00:52:03 +0000 (0:00:01.606) 0:01:55.954 *****
2026-01-07 00:56:30.646065 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646069 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646073 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646076 | orchestrator |
2026-01-07 00:56:30.646080 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-07 00:56:30.646084 | orchestrator | Wednesday 07 January 2026 00:52:05 +0000 (0:00:01.598) 0:01:57.553 *****
2026-01-07 00:56:30.646087 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646091 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646095 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646098 | orchestrator |
2026-01-07 00:56:30.646102 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-07 00:56:30.646106 | orchestrator | Wednesday 07 January 2026 00:52:05 +0000 (0:00:00.467) 0:01:58.021 *****
2026-01-07 00:56:30.646109 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.646113 | orchestrator |
2026-01-07 00:56:30.646117 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-07 00:56:30.646120 | orchestrator | Wednesday 07 January 2026 00:52:06 +0000 (0:00:00.788) 0:01:58.810 *****
2026-01-07 00:56:30.646144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-07 00:56:30.646155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-07 00:56:30.646163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:56:30.646175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.646180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 00:56:30.646190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.646201 | orchestrator | 2026-01-07 00:56:30.646205 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-07 00:56:30.646208 | orchestrator | Wednesday 07 January 2026 00:52:12 +0000 (0:00:05.721) 0:02:04.531 ***** 2026-01-07 00:56:30.646213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 00:56:30.646223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.646233 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.646237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 00:56:30.646245 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.646254 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.646259 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 00:56:30.646282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.646291 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.646295 | orchestrator | 2026-01-07 00:56:30.646299 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-07 00:56:30.646303 | orchestrator | Wednesday 07 January 2026 00:52:15 +0000 (0:00:03.594) 0:02:08.125 ***** 2026-01-07 
00:56:30.646307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646323 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.646327 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.646331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-07 00:56:30.646342 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.646346 | orchestrator | 2026-01-07 00:56:30.646350 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-07 00:56:30.646356 | orchestrator | Wednesday 07 January 2026 00:52:18 +0000 (0:00:03.087) 0:02:11.213 ***** 2026-01-07 00:56:30.646360 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.646364 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.646368 | orchestrator | 
changed: [testbed-node-2] 2026-01-07 00:56:30.646371 | orchestrator | 2026-01-07 00:56:30.646375 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-07 00:56:30.646379 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:01.070) 0:02:12.283 ***** 2026-01-07 00:56:30.646383 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.646387 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.646390 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.646394 | orchestrator | 2026-01-07 00:56:30.646400 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-07 00:56:30.646404 | orchestrator | Wednesday 07 January 2026 00:52:22 +0000 (0:00:02.596) 0:02:14.880 ***** 2026-01-07 00:56:30.646408 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.646412 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.646416 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.646419 | orchestrator | 2026-01-07 00:56:30.646423 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-07 00:56:30.646427 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:00.557) 0:02:15.437 ***** 2026-01-07 00:56:30.646430 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.646434 | orchestrator | 2026-01-07 00:56:30.646438 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-07 00:56:30.646442 | orchestrator | Wednesday 07 January 2026 00:52:23 +0000 (0:00:00.864) 0:02:16.302 ***** 2026-01-07 00:56:30.646446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 00:56:30.646450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 00:56:30.646454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 00:56:30.646461 | orchestrator | 2026-01-07 00:56:30.646465 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-07 00:56:30.646469 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 (0:00:03.463) 0:02:19.765 ***** 2026-01-07 00:56:30.646473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 00:56:30.646477 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.646486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 00:56:30.646491 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.646495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-07 00:56:30.646499 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646502 | orchestrator |
2026-01-07 00:56:30.646506 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-07 00:56:30.646510 | orchestrator | Wednesday 07 January 2026 00:52:27 +0000 (0:00:00.637) 0:02:20.403 *****
2026-01-07 00:56:30.646514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646521 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646536 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-07 00:56:30.646547 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646551 | orchestrator |
2026-01-07 00:56:30.646555 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-07 00:56:30.646558 | orchestrator | Wednesday 07 January 2026 00:52:28 +0000 (0:00:00.651) 0:02:21.055 *****
2026-01-07 00:56:30.646562 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646566 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646569 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646573 | orchestrator |
2026-01-07 00:56:30.646577 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-07 00:56:30.646581 | orchestrator | Wednesday 07 January 2026 00:52:30 +0000 (0:00:01.583) 0:02:22.638 *****
2026-01-07 00:56:30.646584 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646588 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646592 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646595 | orchestrator |
2026-01-07 00:56:30.646599 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-07 00:56:30.646603 | orchestrator | Wednesday 07 January 2026 00:52:32 +0000 (0:00:02.242) 0:02:24.880 *****
2026-01-07 00:56:30.646607 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646611 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646614 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646618 | orchestrator |
2026-01-07 00:56:30.646622 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-07 00:56:30.646625 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:00.537) 0:02:25.418 *****
2026-01-07 00:56:30.646629 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.646633 | orchestrator |
2026-01-07 00:56:30.646636 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-07 00:56:30.646643 | orchestrator | Wednesday 07 January 2026 00:52:33 +0000 (0:00:00.938) 0:02:26.356 *****
2026-01-07 00:56:30.646651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646678 | orchestrator |
2026-01-07 00:56:30.646682 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-01-07 00:56:30.646686 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:04.438) 0:02:30.795 *****
2026-01-07 00:56:30.646695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646700 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646714 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 00:56:30.646731 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646735 | orchestrator |
2026-01-07 00:56:30.646739 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-01-07 00:56:30.646742 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:01.323) 0:02:32.118 *****
2026-01-07 00:56:30.646747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:56:30.646797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:56:30.646815 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646833 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-07 00:56:30.646844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-07 00:56:30.646850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-07 00:56:30.646856 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646862 | orchestrator |
2026-01-07 00:56:30.646869 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-07 00:56:30.646875 | orchestrator | Wednesday 07 January 2026 00:52:40 +0000 (0:00:00.973) 0:02:33.092 *****
2026-01-07 00:56:30.646882 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646888 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646894 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646898 | orchestrator |
2026-01-07 00:56:30.646901 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-07 00:56:30.646905 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:01.343) 0:02:34.436 *****
2026-01-07 00:56:30.646909 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:56:30.646913 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:56:30.646916 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:56:30.646920 | orchestrator |
2026-01-07 00:56:30.646924 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-07 00:56:30.646928 | orchestrator | Wednesday 07 January 2026 00:52:44 +0000 (0:00:02.134) 0:02:36.570 *****
2026-01-07 00:56:30.646932 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646935 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646939 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646943 | orchestrator |
2026-01-07 00:56:30.646947 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-07 00:56:30.646950 | orchestrator | Wednesday 07 January 2026 00:52:44 +0000 (0:00:00.373) 0:02:36.944 *****
2026-01-07 00:56:30.646954 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.646958 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.646961 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.646965 | orchestrator |
2026-01-07 00:56:30.646969 | orchestrator | TASK [include_role : keystone] *************************************************
2026-01-07 00:56:30.646973 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:00.619) 0:02:37.564 *****
2026-01-07 00:56:30.646976 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:56:30.646980 | orchestrator |
2026-01-07 00:56:30.646984 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-01-07 00:56:30.646988 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:01.260) 0:02:38.825 *****
2026-01-07 00:56:30.646995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647051 | orchestrator |
2026-01-07 00:56:30.647055 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-07 00:56:30.647059 | orchestrator | Wednesday 07 January 2026 00:52:50 +0000 (0:00:03.933) 0:02:42.759 *****
2026-01-07 00:56:30.647063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647081 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:56:30.647089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647101 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:56:30.647105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 00:56:30.647118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 00:56:30.647252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 00:56:30.647261 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:56:30.647265 | orchestrator |
2026-01-07 00:56:30.647269 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-07 00:56:30.647273 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:01.219) 0:02:43.978 *****
2026-01-07 00:56:30.647277 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647286 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.647294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647302 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-07 00:56:30.647317 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647321 | 
orchestrator | 2026-01-07 00:56:30.647325 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-07 00:56:30.647329 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:00.899) 0:02:44.877 ***** 2026-01-07 00:56:30.647333 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647336 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647340 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.647344 | orchestrator | 2026-01-07 00:56:30.647348 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-07 00:56:30.647351 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:01.459) 0:02:46.337 ***** 2026-01-07 00:56:30.647355 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647359 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647362 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.647366 | orchestrator | 2026-01-07 00:56:30.647370 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-07 00:56:30.647373 | orchestrator | Wednesday 07 January 2026 00:52:56 +0000 (0:00:02.259) 0:02:48.596 ***** 2026-01-07 00:56:30.647377 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.647381 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647385 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647388 | orchestrator | 2026-01-07 00:56:30.647392 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-07 00:56:30.647396 | orchestrator | Wednesday 07 January 2026 00:52:56 +0000 (0:00:00.563) 0:02:49.160 ***** 2026-01-07 00:56:30.647399 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.647403 | orchestrator | 2026-01-07 00:56:30.647407 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-01-07 00:56:30.647411 | orchestrator | Wednesday 07 January 2026 00:52:57 +0000 (0:00:00.978) 0:02:50.138 ***** 2026-01-07 00:56:30.647422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 00:56:30.647428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 00:56:30.647438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 00:56:30.647456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647460 | orchestrator | 2026-01-07 00:56:30.647464 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-07 00:56:30.647468 | orchestrator | Wednesday 07 January 2026 00:53:01 +0000 (0:00:03.730) 0:02:53.869 ***** 2026-01-07 00:56:30.647472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 00:56:30.647480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647484 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.647488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 00:56:30.647496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647500 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 00:56:30.647510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647517 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647521 | orchestrator | 2026-01-07 00:56:30.647525 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-07 00:56:30.647529 | orchestrator | Wednesday 07 January 2026 00:53:02 +0000 (0:00:01.146) 0:02:55.015 ***** 2026-01-07 00:56:30.647533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647541 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:56:30.647545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647553 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-07 00:56:30.647564 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647568 | orchestrator | 2026-01-07 00:56:30.647572 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-07 00:56:30.647576 | orchestrator | Wednesday 07 January 2026 00:53:03 +0000 (0:00:00.883) 0:02:55.899 ***** 2026-01-07 00:56:30.647580 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647583 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647587 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.647591 | orchestrator | 2026-01-07 00:56:30.647594 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-07 00:56:30.647598 | orchestrator | Wednesday 07 January 2026 00:53:05 +0000 (0:00:01.772) 0:02:57.671 ***** 2026-01-07 00:56:30.647602 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647606 | orchestrator | changed: 
[testbed-node-2] 2026-01-07 00:56:30.647610 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647613 | orchestrator | 2026-01-07 00:56:30.647620 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-07 00:56:30.647624 | orchestrator | Wednesday 07 January 2026 00:53:07 +0000 (0:00:02.156) 0:02:59.828 ***** 2026-01-07 00:56:30.647627 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.647631 | orchestrator | 2026-01-07 00:56:30.647635 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-07 00:56:30.647639 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:01.264) 0:03:01.092 ***** 2026-01-07 00:56:30.647645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:56:30.647653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:56:30.647661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647665 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647687 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-07 00:56:30.647695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647715 | orchestrator | 2026-01-07 00:56:30.647719 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-07 00:56:30.647723 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 (0:00:03.846) 0:03:04.938 ***** 2026-01-07 00:56:30.647727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:56:30.647731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:56:30.647745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647753 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.647759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647772 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-07 00:56:30.647780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.647799 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647822 | orchestrator | 2026-01-07 00:56:30.647826 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-07 00:56:30.647830 | orchestrator | Wednesday 07 January 2026 00:53:13 +0000 (0:00:00.656) 0:03:05.595 ***** 2026-01-07 00:56:30.647834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647842 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.647846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647853 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.647857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-07 00:56:30.647866 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.647871 | orchestrator | 2026-01-07 00:56:30.647875 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-07 00:56:30.647880 | orchestrator | Wednesday 07 January 2026 00:53:14 +0000 (0:00:01.473) 0:03:07.068 ***** 2026-01-07 00:56:30.647884 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647888 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647893 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.647897 | orchestrator | 2026-01-07 00:56:30.647903 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-07 00:56:30.647909 | orchestrator | Wednesday 07 January 2026 00:53:16 +0000 (0:00:01.461) 0:03:08.530 ***** 2026-01-07 00:56:30.647916 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.647922 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.647928 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.647934 | orchestrator | 2026-01-07 00:56:30.647940 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-07 00:56:30.647954 | orchestrator | Wednesday 07 January 2026 00:53:18 +0000 (0:00:02.218) 0:03:10.749 ***** 2026-01-07 00:56:30.647961 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.647967 | orchestrator | 2026-01-07 00:56:30.647973 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-07 00:56:30.647979 | orchestrator | Wednesday 07 January 2026 00:53:19 +0000 (0:00:01.311) 0:03:12.060 ***** 2026-01-07 00:56:30.647985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:56:30.647991 | orchestrator | 2026-01-07 00:56:30.647996 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-07 00:56:30.648002 | orchestrator | Wednesday 07 January 2026 00:53:22 +0000 (0:00:02.960) 0:03:15.021 ***** 2026-01-07 00:56:30.648020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-07 00:56:30.648036 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-07 00:56:30.648066 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648078 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-07 00:56:30.648100 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648106 | orchestrator | 2026-01-07 00:56:30.648112 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-07 00:56:30.648119 | orchestrator | Wednesday 07 January 2026 00:53:24 +0000 (0:00:02.275) 0:03:17.297 ***** 2026-01-07 00:56:30.648133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-07 00:56:30.648148 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-01-07 00:56:30.648174 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:56:30.648191 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-07 00:56:30.648199 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648203 | orchestrator | 2026-01-07 00:56:30.648208 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-07 00:56:30.648212 | orchestrator | Wednesday 07 January 2026 00:53:27 +0000 (0:00:02.381) 0:03:19.678 ***** 2026-01-07 00:56:30.648217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648226 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648243 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-07 00:56:30.648272 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648276 | orchestrator | 2026-01-07 00:56:30.648280 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-07 00:56:30.648284 | orchestrator | Wednesday 07 January 2026 00:53:30 +0000 (0:00:03.184) 0:03:22.863 ***** 2026-01-07 00:56:30.648288 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.648292 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.648296 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.648300 | orchestrator | 2026-01-07 00:56:30.648303 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-07 00:56:30.648307 | orchestrator | Wednesday 07 January 2026 00:53:32 +0000 (0:00:01.947) 0:03:24.810 ***** 2026-01-07 00:56:30.648311 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648315 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648319 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648323 | orchestrator | 2026-01-07 00:56:30.648327 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-07 00:56:30.648330 | orchestrator | Wednesday 07 January 2026 00:53:33 +0000 (0:00:01.474) 0:03:26.285 ***** 2026-01-07 00:56:30.648334 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648338 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648342 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648345 | orchestrator | 2026-01-07 00:56:30.648349 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-07 00:56:30.648353 | orchestrator | Wednesday 07 January 2026 00:53:34 +0000 (0:00:00.376) 0:03:26.662 ***** 2026-01-07 00:56:30.648357 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.648360 | orchestrator | 2026-01-07 00:56:30.648364 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-07 00:56:30.648368 | orchestrator | Wednesday 07 January 2026 00:53:35 +0000 (0:00:01.474) 0:03:28.136 ***** 2026-01-07 00:56:30.648372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:56:30.648383 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:56:30.648388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-07 00:56:30.648395 | orchestrator | 2026-01-07 00:56:30.648398 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-07 00:56:30.648402 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:01.481) 0:03:29.617 ***** 2026-01-07 00:56:30.648406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:56:30.648411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:56:30.648415 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648418 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-07 00:56:30.648426 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648430 | orchestrator | 2026-01-07 00:56:30.648434 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-07 00:56:30.648440 | orchestrator | Wednesday 07 January 2026 00:53:37 +0000 (0:00:00.393) 0:03:30.011 ***** 2026-01-07 00:56:30.648445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:56:30.648451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:56:30.648459 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648463 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-07 00:56:30.648471 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648474 | orchestrator | 2026-01-07 00:56:30.648478 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-07 00:56:30.648482 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.880) 0:03:30.892 ***** 2026-01-07 00:56:30.648486 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648489 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648493 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648497 | orchestrator | 2026-01-07 00:56:30.648501 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-07 00:56:30.648505 | orchestrator | Wednesday 07 January 2026 00:53:38 +0000 (0:00:00.453) 0:03:31.345 ***** 2026-01-07 00:56:30.648508 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648512 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648516 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648520 | orchestrator | 2026-01-07 00:56:30.648523 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-07 00:56:30.648527 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:01.261) 0:03:32.607 ***** 2026-01-07 00:56:30.648531 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.648535 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.648538 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.648542 | orchestrator | 2026-01-07 00:56:30.648546 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-07 00:56:30.648550 | orchestrator | Wednesday 07 January 2026 00:53:40 +0000 (0:00:00.340) 0:03:32.948 ***** 2026-01-07 00:56:30.648553 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.648557 | orchestrator | 2026-01-07 00:56:30.648561 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-01-07 00:56:30.648565 | orchestrator | Wednesday 07 January 2026 00:53:42 +0000 (0:00:01.506) 0:03:34.454 ***** 2026-01-07 00:56:30.648569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 00:56:30.648573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.648709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.648745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 00:56:30.648753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 
00:56:30.648780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.648789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.648850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.648856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.648884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648901 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.648908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.648918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 00:56:30.648922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.648946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.648969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.648989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.648993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.648997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649004 | orchestrator | 2026-01-07 00:56:30.649008 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-07 00:56:30.649013 | orchestrator | Wednesday 07 January 2026 00:53:46 +0000 (0:00:04.411) 0:03:38.865 ***** 2026-01-07 00:56:30.649017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:56:30.649026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.649045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:56:30.649058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-01-07 00:56:30.649063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.649110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2026-01-07 00:56:30.649118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 00:56:30.649146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.649217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-07 00:56:30.649252 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.649258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.649316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649332 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649349 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-07 00:56:30.649354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-07 00:56:30.649364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-07 00:56:30.649368 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.649372 | orchestrator | 2026-01-07 00:56:30.649377 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-07 00:56:30.649382 | orchestrator | Wednesday 07 January 2026 00:53:47 +0000 (0:00:01.432) 0:03:40.298 ***** 2026-01-07 00:56:30.649389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:56:30.649394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:56:30.649399 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.649485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}})  2026-01-07 00:56:30.649492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:56:30.649500 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:56:30.649509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-07 00:56:30.649518 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.649523 | orchestrator | 2026-01-07 00:56:30.649527 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-07 00:56:30.649531 | orchestrator | Wednesday 07 January 2026 00:53:49 +0000 (0:00:02.092) 0:03:42.391 ***** 2026-01-07 00:56:30.649536 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.649540 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.649545 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.649550 | orchestrator | 2026-01-07 00:56:30.649554 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-07 00:56:30.649559 | orchestrator | Wednesday 07 January 2026 00:53:51 +0000 (0:00:01.415) 0:03:43.806 ***** 2026-01-07 00:56:30.649563 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.649566 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.649570 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.649574 | orchestrator | 2026-01-07 00:56:30.649578 | orchestrator | TASK [include_role : 
placement] ************************************************ 2026-01-07 00:56:30.649581 | orchestrator | Wednesday 07 January 2026 00:53:53 +0000 (0:00:02.218) 0:03:46.025 ***** 2026-01-07 00:56:30.649585 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.649589 | orchestrator | 2026-01-07 00:56:30.649593 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-07 00:56:30.649596 | orchestrator | Wednesday 07 January 2026 00:53:54 +0000 (0:00:01.185) 0:03:47.211 ***** 2026-01-07 00:56:30.649601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649623 | orchestrator | 2026-01-07 00:56:30.649627 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-07 00:56:30.649631 | orchestrator | Wednesday 07 January 2026 00:53:58 +0000 (0:00:03.873) 0:03:51.084 ***** 2026-01-07 00:56:30.649635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649638 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.649642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649646 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649657 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.649661 | orchestrator | 2026-01-07 00:56:30.649665 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-07 00:56:30.649669 | orchestrator | Wednesday 07 January 2026 00:53:59 +0000 (0:00:00.540) 0:03:51.625 ***** 2026-01-07 00:56:30.649675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649683 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.649689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649693 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649697 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649709 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.649713 | orchestrator | 2026-01-07 00:56:30.649717 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-07 00:56:30.649720 | orchestrator | Wednesday 07 January 2026 00:53:59 +0000 (0:00:00.769) 0:03:52.395 ***** 2026-01-07 00:56:30.649724 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.649728 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.649732 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.649736 | orchestrator | 2026-01-07 00:56:30.649739 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-07 00:56:30.649743 | orchestrator | Wednesday 07 January 2026 00:54:01 +0000 (0:00:01.362) 0:03:53.757 ***** 2026-01-07 00:56:30.649747 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.649751 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.649754 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.649758 | orchestrator | 2026-01-07 00:56:30.649762 | orchestrator | TASK [include_role : nova] 
***************************************************** 2026-01-07 00:56:30.649766 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:02.287) 0:03:56.044 ***** 2026-01-07 00:56:30.649770 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.649773 | orchestrator | 2026-01-07 00:56:30.649777 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-07 00:56:30.649781 | orchestrator | Wednesday 07 January 2026 00:54:05 +0000 (0:00:01.491) 0:03:57.536 ***** 2026-01-07 00:56:30.649785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.649849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649857 | orchestrator | 2026-01-07 00:56:30.649861 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-07 00:56:30.649865 | orchestrator | Wednesday 07 January 2026 00:54:09 +0000 (0:00:04.358) 0:04:01.895 ***** 2026-01-07 00:56:30.649869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649876 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649886 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.649894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649910 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649914 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.649921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.649931 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.649935 | orchestrator | 2026-01-07 00:56:30.649939 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-07 00:56:30.649942 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:01.059) 0:04:02.955 ***** 2026-01-07 00:56:30.649947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649966 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
00:56:30.649970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649985 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.649989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-07 00:56:30.649996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.650000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-07 00:56:30.650004 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650008 | orchestrator | 2026-01-07 00:56:30.650069 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-07 00:56:30.650079 | orchestrator | Wednesday 07 January 2026 00:54:11 +0000 (0:00:01.305) 0:04:04.260 ***** 2026-01-07 00:56:30.650083 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.650086 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.650090 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.650094 | orchestrator | 2026-01-07 00:56:30.650098 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-07 00:56:30.650102 | orchestrator | Wednesday 07 January 2026 00:54:13 +0000 (0:00:01.486) 0:04:05.747 ***** 2026-01-07 00:56:30.650105 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.650110 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.650113 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.650117 | orchestrator | 2026-01-07 00:56:30.650134 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-07 00:56:30.650138 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:02.206) 0:04:07.953 ***** 2026-01-07 00:56:30.650142 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.650146 | orchestrator | 2026-01-07 00:56:30.650149 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-07 00:56:30.650153 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:01.582) 0:04:09.536 ***** 2026-01-07 00:56:30.650157 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-01-07 00:56:30.650164 | orchestrator | 2026-01-07 00:56:30.650168 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-07 00:56:30.650172 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.848) 0:04:10.385 ***** 2026-01-07 00:56:30.650176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:56:30.650181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:56:30.650185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-07 00:56:30.650189 | orchestrator | 
2026-01-07 00:56:30.650193 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-07 00:56:30.650197 | orchestrator | Wednesday 07 January 2026 00:54:22 +0000 (0:00:04.587) 0:04:14.972 ***** 2026-01-07 00:56:30.650201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650205 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650213 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650221 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650225 | orchestrator | 2026-01-07 00:56:30.650239 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-07 00:56:30.650247 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:01.404) 0:04:16.377 ***** 2026-01-07 00:56:30.650251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650260 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650272 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650334 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-07 00:56:30.650348 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650352 | orchestrator | 2026-01-07 00:56:30.650356 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:56:30.650359 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:01.357) 0:04:17.734 ***** 2026-01-07 00:56:30.650363 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.650367 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.650371 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.650375 | orchestrator | 2026-01-07 00:56:30.650378 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:56:30.650382 | orchestrator | Wednesday 07 January 2026 00:54:27 +0000 (0:00:02.238) 0:04:19.973 ***** 2026-01-07 00:56:30.650387 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.650390 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.650394 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.650398 | orchestrator | 2026-01-07 00:56:30.650401 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-07 00:56:30.650405 | orchestrator | Wednesday 07 January 2026 00:54:30 +0000 (0:00:02.991) 0:04:22.965 ***** 2026-01-07 00:56:30.650410 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-07 00:56:30.650413 | orchestrator | 2026-01-07 00:56:30.650417 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-07 00:56:30.650421 | orchestrator | 
Wednesday 07 January 2026 00:54:32 +0000 (0:00:01.588) 0:04:24.554 ***** 2026-01-07 00:56:30.650425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650433 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650444 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650468 | orchestrator | skipping: [testbed-node-2] 2026-01-07 
00:56:30.650472 | orchestrator | 2026-01-07 00:56:30.650475 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-07 00:56:30.650479 | orchestrator | Wednesday 07 January 2026 00:54:33 +0000 (0:00:01.345) 0:04:25.899 ***** 2026-01-07 00:56:30.650483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650487 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650495 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-07 00:56:30.650503 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650506 | orchestrator | 2026-01-07 00:56:30.650510 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-07 00:56:30.650514 | orchestrator | Wednesday 07 January 2026 00:54:34 +0000 (0:00:01.363) 0:04:27.263 ***** 2026-01-07 00:56:30.650518 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650521 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650525 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650529 | orchestrator | 2026-01-07 00:56:30.650533 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:56:30.650540 | orchestrator | Wednesday 07 January 2026 00:54:36 +0000 (0:00:02.007) 0:04:29.270 ***** 2026-01-07 00:56:30.650544 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.650548 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.650551 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.650555 | orchestrator | 2026-01-07 00:56:30.650559 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:56:30.650563 | orchestrator | Wednesday 07 January 2026 00:54:39 +0000 (0:00:02.275) 0:04:31.545 ***** 2026-01-07 00:56:30.650566 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.650570 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.650574 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.650578 | orchestrator | 2026-01-07 00:56:30.650581 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-07 00:56:30.650586 | orchestrator | Wednesday 07 January 2026 00:54:42 +0000 (0:00:02.913) 0:04:34.459 ***** 2026-01-07 00:56:30.650589 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-07 00:56:30.650593 | orchestrator | 2026-01-07 00:56:30.650597 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-07 00:56:30.650601 | orchestrator | Wednesday 07 January 2026 00:54:42 +0000 (0:00:00.896) 0:04:35.356 ***** 2026-01-07 00:56:30.650620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650625 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650633 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650641 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650644 | orchestrator | 2026-01-07 00:56:30.650648 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-07 00:56:30.650652 | orchestrator | Wednesday 07 January 2026 00:54:44 +0000 (0:00:01.388) 0:04:36.744 ***** 2026-01-07 00:56:30.650656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650664 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650672 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650676 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-07 00:56:30.650680 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650683 | orchestrator | 2026-01-07 00:56:30.650687 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-07 00:56:30.650691 | orchestrator | Wednesday 07 January 2026 00:54:45 +0000 (0:00:01.380) 0:04:38.124 ***** 2026-01-07 00:56:30.650695 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650698 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.650702 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.650706 | orchestrator | 2026-01-07 00:56:30.650709 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-07 00:56:30.650713 | orchestrator | Wednesday 07 January 2026 00:54:47 +0000 (0:00:01.565) 0:04:39.689 ***** 2026-01-07 00:56:30.650717 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.650721 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.650725 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.650728 | orchestrator | 2026-01-07 00:56:30.650735 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-07 00:56:30.650739 | orchestrator | Wednesday 07 January 2026 00:54:49 +0000 (0:00:02.319) 0:04:42.009 ***** 2026-01-07 00:56:30.650743 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.650747 | orchestrator | ok: 
[testbed-node-1] 2026-01-07 00:56:30.650751 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.650754 | orchestrator | 2026-01-07 00:56:30.650758 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-07 00:56:30.650762 | orchestrator | Wednesday 07 January 2026 00:54:52 +0000 (0:00:03.382) 0:04:45.392 ***** 2026-01-07 00:56:30.650777 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.650781 | orchestrator | 2026-01-07 00:56:30.650785 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-07 00:56:30.650789 | orchestrator | Wednesday 07 January 2026 00:54:54 +0000 (0:00:01.613) 0:04:47.005 ***** 2026-01-07 00:56:30.650793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.650819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.650827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.650866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.650870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.650878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.650892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.650908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.650912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-07 00:56:30.650919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.650928 | orchestrator | 2026-01-07 00:56:30.650931 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-07 00:56:30.650935 | orchestrator | Wednesday 07 January 2026 00:54:58 +0000 (0:00:03.616) 0:04:50.622 ***** 2026-01-07 00:56:30.650940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.650947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.650963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.650975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.650979 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.650983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.650987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.650991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.651011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.651019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.651023 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.651031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 00:56:30.651035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.651039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 00:56:30.651056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 00:56:30.651064 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651068 | orchestrator | 2026-01-07 00:56:30.651072 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-07 00:56:30.651075 | orchestrator | Wednesday 07 January 2026 00:54:58 +0000 (0:00:00.721) 0:04:51.344 ***** 2026-01-07 00:56:30.651079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651088 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651099 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-07 00:56:30.651111 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651114 | orchestrator | 2026-01-07 00:56:30.651118 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-07 00:56:30.651122 | orchestrator | Wednesday 07 January 2026 00:55:00 +0000 (0:00:01.571) 0:04:52.915 ***** 2026-01-07 00:56:30.651125 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.651129 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.651133 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.651137 | orchestrator | 2026-01-07 00:56:30.651140 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-07 00:56:30.651144 | orchestrator | Wednesday 07 January 2026 00:55:02 +0000 (0:00:01.529) 0:04:54.445 ***** 2026-01-07 00:56:30.651148 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.651152 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.651155 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.651159 | orchestrator | 2026-01-07 00:56:30.651163 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-07 00:56:30.651166 | orchestrator | Wednesday 07 January 2026 00:55:04 +0000 (0:00:02.257) 0:04:56.702 ***** 2026-01-07 00:56:30.651170 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.651174 | orchestrator | 2026-01-07 00:56:30.651178 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-01-07 00:56:30.651181 | orchestrator | Wednesday 07 January 2026 00:55:05 +0000 (0:00:01.378) 0:04:58.080 ***** 2026-01-07 00:56:30.651186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:56:30.651209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:56:30.651214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:56:30.651219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:56:30.651224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:56:30.651245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:56:30.651250 | orchestrator | 2026-01-07 00:56:30.651254 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-07 00:56:30.651258 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:05.455) 0:05:03.535 ***** 2026-01-07 00:56:30.651262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:56:30.651267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:56:30.651271 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:56:30.651295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:56:30.651300 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:56:30.651308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:56:30.651312 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651316 | orchestrator | 2026-01-07 00:56:30.651320 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-07 00:56:30.651324 | orchestrator | Wednesday 07 January 2026 00:55:11 +0000 (0:00:00.658) 0:05:04.194 ***** 2026-01-07 00:56:30.651328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-07 00:56:30.651335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651343 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-01-07 00:56:30.651351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651362 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-07 00:56:30.651380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-07 00:56:30.651389 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651393 | orchestrator | 2026-01-07 00:56:30.651397 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-07 00:56:30.651401 | orchestrator | Wednesday 07 January 2026 00:55:12 +0000 (0:00:00.924) 0:05:05.119 ***** 2026-01-07 00:56:30.651405 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651408 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651412 | orchestrator | 
skipping: [testbed-node-2] 2026-01-07 00:56:30.651416 | orchestrator | 2026-01-07 00:56:30.651420 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-07 00:56:30.651424 | orchestrator | Wednesday 07 January 2026 00:55:13 +0000 (0:00:00.837) 0:05:05.957 ***** 2026-01-07 00:56:30.651427 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651431 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651435 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651438 | orchestrator | 2026-01-07 00:56:30.651442 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-07 00:56:30.651446 | orchestrator | Wednesday 07 January 2026 00:55:14 +0000 (0:00:01.311) 0:05:07.269 ***** 2026-01-07 00:56:30.651450 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.651454 | orchestrator | 2026-01-07 00:56:30.651457 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-07 00:56:30.651461 | orchestrator | Wednesday 07 January 2026 00:55:16 +0000 (0:00:01.409) 0:05:08.678 ***** 2026-01-07 00:56:30.651466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:56:30.651473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:56:30.651511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 00:56:30.651526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 
00:56:30.651566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:56:30.651577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:56:30.651604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 00:56:30.651652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651682 | orchestrator | 2026-01-07 00:56:30.651691 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-07 00:56:30.651698 | orchestrator | Wednesday 07 January 2026 00:55:20 +0000 (0:00:04.612) 0:05:13.291 ***** 2026-01-07 00:56:30.651705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:56:30.651717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 00:56:30.651758 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651776 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651779 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:56:30.651790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651797 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 00:56:30.651856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 00:56:30.651871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 00:56:30.651883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651895 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.651899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 00:56:30.651924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-07 00:56:30.651928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 00:56:30.651936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 00:56:30.651940 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.651943 | orchestrator | 2026-01-07 00:56:30.651947 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-07 00:56:30.651951 | orchestrator | Wednesday 07 January 2026 00:55:22 +0000 (0:00:01.417) 0:05:14.709 ***** 2026-01-07 00:56:30.651955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-07 00:56:30.651959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-07 00:56:30.651966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.651975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.651980 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.651986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-01-07 00:56:30.651990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-07 00:56:30.651994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.651998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.652002 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-07 00:56:30.652009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-07 00:56:30.652013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.652017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-07 00:56:30.652021 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652025 | orchestrator | 2026-01-07 00:56:30.652029 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-07 00:56:30.652033 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:01.044) 0:05:15.753 ***** 2026-01-07 00:56:30.652037 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652040 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652044 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652048 | orchestrator | 2026-01-07 00:56:30.652052 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-07 00:56:30.652056 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.454) 0:05:16.207 ***** 2026-01-07 00:56:30.652059 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652063 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652067 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652071 | orchestrator | 2026-01-07 00:56:30.652074 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-07 00:56:30.652078 | orchestrator | Wednesday 07 January 2026 00:55:25 +0000 (0:00:01.534) 0:05:17.742 ***** 2026-01-07 00:56:30.652085 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.652089 | orchestrator | 2026-01-07 00:56:30.652093 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-07 00:56:30.652097 | orchestrator | Wednesday 07 January 2026 00:55:27 +0000 (0:00:01.773) 0:05:19.515 ***** 
2026-01-07 00:56:30.652106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:56:30.652110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:56:30.652115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-07 00:56:30.652119 | orchestrator | 2026-01-07 00:56:30.652123 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-07 00:56:30.652127 | orchestrator | Wednesday 07 January 2026 00:55:29 +0000 (0:00:02.642) 0:05:22.158 ***** 2026-01-07 00:56:30.652131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:56:30.652144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:56:30.652149 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652153 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-07 00:56:30.652161 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652165 | orchestrator | 2026-01-07 00:56:30.652169 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-07 00:56:30.652173 | orchestrator | Wednesday 07 January 2026 00:55:30 +0000 (0:00:00.420) 0:05:22.578 ***** 2026-01-07 00:56:30.652178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:56:30.652182 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:56:30.652190 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-07 00:56:30.652198 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652202 | orchestrator | 2026-01-07 00:56:30.652206 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-07 00:56:30.652214 | orchestrator | Wednesday 07 
January 2026 00:55:31 +0000 (0:00:01.076) 0:05:23.654 ***** 2026-01-07 00:56:30.652218 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652221 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652225 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652231 | orchestrator | 2026-01-07 00:56:30.652238 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-07 00:56:30.652244 | orchestrator | Wednesday 07 January 2026 00:55:31 +0000 (0:00:00.462) 0:05:24.117 ***** 2026-01-07 00:56:30.652250 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652257 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652264 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652270 | orchestrator | 2026-01-07 00:56:30.652276 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-07 00:56:30.652283 | orchestrator | Wednesday 07 January 2026 00:55:33 +0000 (0:00:01.428) 0:05:25.545 ***** 2026-01-07 00:56:30.652289 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:56:30.652295 | orchestrator | 2026-01-07 00:56:30.652301 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-07 00:56:30.652308 | orchestrator | Wednesday 07 January 2026 00:55:34 +0000 (0:00:01.785) 0:05:27.331 ***** 2026-01-07 00:56:30.652318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-07 00:56:30.652379 | orchestrator | 2026-01-07 00:56:30.652386 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-07 00:56:30.652393 | orchestrator | Wednesday 07 January 2026 00:55:41 +0000 (0:00:06.274) 0:05:33.605 ***** 2026-01-07 00:56:30.652399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652417 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652445 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-07 00:56:30.652470 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652476 | orchestrator | 2026-01-07 00:56:30.652483 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-07 
00:56:30.652490 | orchestrator | Wednesday 07 January 2026 00:55:41 +0000 (0:00:00.625) 0:05:34.230 ***** 2026-01-07 00:56:30.652496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652523 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652556 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652562 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-07 00:56:30.652601 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652608 | orchestrator | 2026-01-07 00:56:30.652615 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-07 00:56:30.652621 | orchestrator | Wednesday 07 January 2026 00:55:43 +0000 (0:00:01.687) 0:05:35.918 ***** 2026-01-07 00:56:30.652628 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.652634 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.652641 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.652648 | orchestrator | 2026-01-07 00:56:30.652654 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-07 00:56:30.652660 | orchestrator | Wednesday 07 January 2026 00:55:44 +0000 (0:00:01.405) 0:05:37.324 ***** 2026-01-07 00:56:30.652667 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.652674 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.652680 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.652686 | orchestrator | 2026-01-07 00:56:30.652693 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-07 00:56:30.652700 | orchestrator | Wednesday 07 January 2026 00:55:47 +0000 (0:00:02.241) 0:05:39.566 ***** 2026-01-07 00:56:30.652707 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652713 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652720 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652726 | orchestrator | 2026-01-07 00:56:30.652733 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-07 00:56:30.652740 | orchestrator | Wednesday 07 January 2026 00:55:47 +0000 (0:00:00.323) 0:05:39.890 ***** 2026-01-07 00:56:30.652746 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652753 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652760 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652766 | orchestrator | 2026-01-07 00:56:30.652773 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-07 00:56:30.652780 | orchestrator | Wednesday 07 January 2026 00:55:47 +0000 (0:00:00.307) 0:05:40.197 ***** 2026-01-07 00:56:30.652787 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652793 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652800 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652825 | orchestrator | 2026-01-07 00:56:30.652831 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-01-07 00:56:30.652837 | orchestrator | Wednesday 07 January 2026 00:55:48 +0000 (0:00:00.645) 0:05:40.843 ***** 2026-01-07 00:56:30.652844 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652850 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652856 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652862 | orchestrator | 2026-01-07 00:56:30.652869 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-07 00:56:30.652875 | orchestrator | Wednesday 07 January 2026 00:55:48 +0000 (0:00:00.335) 0:05:41.178 ***** 2026-01-07 00:56:30.652879 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652883 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652887 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652891 | orchestrator | 2026-01-07 00:56:30.652895 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-07 00:56:30.652899 | orchestrator | Wednesday 07 January 2026 00:55:49 +0000 (0:00:00.317) 0:05:41.496 ***** 2026-01-07 00:56:30.652903 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.652907 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.652915 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.652919 | orchestrator | 2026-01-07 00:56:30.652923 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-07 00:56:30.652927 | orchestrator | Wednesday 07 January 2026 00:55:49 +0000 (0:00:00.864) 0:05:42.361 ***** 2026-01-07 00:56:30.652931 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.652935 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.652939 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.652943 | orchestrator | 2026-01-07 00:56:30.652947 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-07 00:56:30.652954 | orchestrator | Wednesday 07 January 2026 00:55:50 +0000 (0:00:00.738) 0:05:43.100 ***** 2026-01-07 00:56:30.652958 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.652962 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.652966 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.652970 | orchestrator | 2026-01-07 00:56:30.652974 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-07 00:56:30.652978 | orchestrator | Wednesday 07 January 2026 00:55:51 +0000 (0:00:00.353) 0:05:43.454 ***** 2026-01-07 00:56:30.652982 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.652985 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.652989 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.652993 | orchestrator | 2026-01-07 00:56:30.653001 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-07 00:56:30.653005 | orchestrator | Wednesday 07 January 2026 00:55:52 +0000 (0:00:00.985) 0:05:44.439 ***** 2026-01-07 00:56:30.653009 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653012 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653016 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653020 | orchestrator | 2026-01-07 00:56:30.653024 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-07 00:56:30.653028 | orchestrator | Wednesday 07 January 2026 00:55:53 +0000 (0:00:01.220) 0:05:45.660 ***** 2026-01-07 00:56:30.653032 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653036 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653040 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653044 | orchestrator | 2026-01-07 00:56:30.653048 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-01-07 00:56:30.653052 | orchestrator | Wednesday 07 January 2026 00:55:54 +0000 (0:00:00.952) 0:05:46.612 ***** 2026-01-07 00:56:30.653056 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.653060 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.653064 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.653068 | orchestrator | 2026-01-07 00:56:30.653071 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-07 00:56:30.653075 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:04.885) 0:05:51.498 ***** 2026-01-07 00:56:30.653079 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653083 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653087 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653091 | orchestrator | 2026-01-07 00:56:30.653095 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-07 00:56:30.653099 | orchestrator | Wednesday 07 January 2026 00:56:01 +0000 (0:00:02.769) 0:05:54.268 ***** 2026-01-07 00:56:30.653103 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.653107 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.653111 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.653115 | orchestrator | 2026-01-07 00:56:30.653119 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-07 00:56:30.653123 | orchestrator | Wednesday 07 January 2026 00:56:10 +0000 (0:00:08.777) 0:06:03.046 ***** 2026-01-07 00:56:30.653127 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653131 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653135 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653138 | orchestrator | 2026-01-07 00:56:30.653142 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-07 00:56:30.653150 | 
orchestrator | Wednesday 07 January 2026 00:56:14 +0000 (0:00:04.226) 0:06:07.272 ***** 2026-01-07 00:56:30.653154 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:56:30.653158 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:56:30.653162 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:56:30.653166 | orchestrator | 2026-01-07 00:56:30.653170 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-07 00:56:30.653174 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:09.406) 0:06:16.679 ***** 2026-01-07 00:56:30.653177 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653181 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653185 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653189 | orchestrator | 2026-01-07 00:56:30.653193 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-07 00:56:30.653197 | orchestrator | Wednesday 07 January 2026 00:56:24 +0000 (0:00:00.391) 0:06:17.070 ***** 2026-01-07 00:56:30.653201 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653205 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653209 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653213 | orchestrator | 2026-01-07 00:56:30.653217 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-07 00:56:30.653221 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.352) 0:06:17.423 ***** 2026-01-07 00:56:30.653225 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653228 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653232 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653236 | orchestrator | 2026-01-07 00:56:30.653240 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-07 00:56:30.653244 | 
orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:00.737) 0:06:18.160 ***** 2026-01-07 00:56:30.653248 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653252 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653256 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653260 | orchestrator | 2026-01-07 00:56:30.653264 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-07 00:56:30.653268 | orchestrator | Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.372) 0:06:18.533 ***** 2026-01-07 00:56:30.653272 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653275 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653279 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653283 | orchestrator | 2026-01-07 00:56:30.653287 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-07 00:56:30.653291 | orchestrator | Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.376) 0:06:18.910 ***** 2026-01-07 00:56:30.653295 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:56:30.653299 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:56:30.653303 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:56:30.653307 | orchestrator | 2026-01-07 00:56:30.653311 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-07 00:56:30.653315 | orchestrator | Wednesday 07 January 2026 00:56:26 +0000 (0:00:00.364) 0:06:19.274 ***** 2026-01-07 00:56:30.653321 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653325 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653329 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653333 | orchestrator | 2026-01-07 00:56:30.653337 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-07 00:56:30.653341 | orchestrator | 
Wednesday 07 January 2026 00:56:28 +0000 (0:00:01.417) 0:06:20.692 ***** 2026-01-07 00:56:30.653345 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:56:30.653349 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:56:30.653353 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:56:30.653357 | orchestrator | 2026-01-07 00:56:30.653361 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:56:30.653370 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:56:30.653375 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:56:30.653379 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-07 00:56:30.653383 | orchestrator | 2026-01-07 00:56:30.653387 | orchestrator | 2026-01-07 00:56:30.653391 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:56:30.653395 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:00.861) 0:06:21.553 ***** 2026-01-07 00:56:30.653399 | orchestrator | =============================================================================== 2026-01-07 00:56:30.653403 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.41s 2026-01-07 00:56:30.653407 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.78s 2026-01-07 00:56:30.653411 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.27s 2026-01-07 00:56:30.653414 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.72s 2026-01-07 00:56:30.653418 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.46s 2026-01-07 00:56:30.653422 | orchestrator | sysctl : Setting sysctl values 
------------------------------------------ 5.39s 2026-01-07 00:56:30.653426 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.09s 2026-01-07 00:56:30.653430 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.89s 2026-01-07 00:56:30.653434 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.61s 2026-01-07 00:56:30.653438 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.59s 2026-01-07 00:56:30.653442 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.47s 2026-01-07 00:56:30.653446 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.44s 2026-01-07 00:56:30.653450 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.41s 2026-01-07 00:56:30.653453 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.36s 2026-01-07 00:56:30.653457 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.25s 2026-01-07 00:56:30.653461 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.23s 2026-01-07 00:56:30.653465 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.23s 2026-01-07 00:56:30.653469 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.05s 2026-01-07 00:56:30.653473 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.93s 2026-01-07 00:56:30.653477 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.88s 2026-01-07 00:56:30.653481 | orchestrator | 2026-01-07 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:33.685733 | orchestrator | 2026-01-07 00:56:33 | INFO  | Task 
d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:33.686907 | orchestrator | 2026-01-07 00:56:33 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:33.688036 | orchestrator | 2026-01-07 00:56:33 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:33.688217 | orchestrator | 2026-01-07 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:36.729072 | orchestrator | 2026-01-07 00:56:36 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:36.729170 | orchestrator | 2026-01-07 00:56:36 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:36.730427 | orchestrator | 2026-01-07 00:56:36 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:36.730469 | orchestrator | 2026-01-07 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:39.769021 | orchestrator | 2026-01-07 00:56:39 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:39.769610 | orchestrator | 2026-01-07 00:56:39 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:39.770655 | orchestrator | 2026-01-07 00:56:39 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:39.770701 | orchestrator | 2026-01-07 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:42.802181 | orchestrator | 2026-01-07 00:56:42 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:42.809140 | orchestrator | 2026-01-07 00:56:42 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:42.809284 | orchestrator | 2026-01-07 00:56:42 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:42.809362 | orchestrator | 2026-01-07 00:56:42 | INFO  | Wait 1 second(s) until the next 
check 2026-01-07 00:56:45.858469 | orchestrator | 2026-01-07 00:56:45 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:45.859198 | orchestrator | 2026-01-07 00:56:45 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:45.861706 | orchestrator | 2026-01-07 00:56:45 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:45.861740 | orchestrator | 2026-01-07 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:48.901380 | orchestrator | 2026-01-07 00:56:48 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:48.903075 | orchestrator | 2026-01-07 00:56:48 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:48.905021 | orchestrator | 2026-01-07 00:56:48 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:48.905069 | orchestrator | 2026-01-07 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:51.943692 | orchestrator | 2026-01-07 00:56:51 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:51.943842 | orchestrator | 2026-01-07 00:56:51 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:51.945552 | orchestrator | 2026-01-07 00:56:51 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:51.945601 | orchestrator | 2026-01-07 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:54.992016 | orchestrator | 2026-01-07 00:56:54 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:54.992477 | orchestrator | 2026-01-07 00:56:54 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:54.993398 | orchestrator | 2026-01-07 00:56:54 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 
00:56:54.993506 | orchestrator | 2026-01-07 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:56:58.021374 | orchestrator | 2026-01-07 00:56:58 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:56:58.021450 | orchestrator | 2026-01-07 00:56:58 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:56:58.022322 | orchestrator | 2026-01-07 00:56:58 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:56:58.022402 | orchestrator | 2026-01-07 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:01.059881 | orchestrator | 2026-01-07 00:57:01 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:57:01.060974 | orchestrator | 2026-01-07 00:57:01 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:57:01.062862 | orchestrator | 2026-01-07 00:57:01 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:57:01.062916 | orchestrator | 2026-01-07 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:04.109657 | orchestrator | 2026-01-07 00:57:04 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:57:04.112428 | orchestrator | 2026-01-07 00:57:04 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:57:04.116449 | orchestrator | 2026-01-07 00:57:04 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:57:04.116516 | orchestrator | 2026-01-07 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:57:07.161576 | orchestrator | 2026-01-07 00:57:07 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:57:07.164527 | orchestrator | 2026-01-07 00:57:07 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:57:07.166922 | orchestrator | 2026-01-07 00:57:07 | 
INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state STARTED 2026-01-07 00:57:07.166970 | orchestrator | 2026-01-07 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:58:41.818392 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED 2026-01-07 00:58:41.821278 | orchestrator
| 2026-01-07 00:58:41 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:58:41.827754 | orchestrator | 2026-01-07 00:58:41 | INFO  | Task 72f7456a-cba5-4839-b931-366ea4cb805e is in state SUCCESS 2026-01-07 00:58:41.828599 | orchestrator | 2026-01-07 00:58:41.830152 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 00:58:41.830190 | orchestrator | 2.16.14 2026-01-07 00:58:41.830197 | orchestrator | 2026-01-07 00:58:41.830204 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-07 00:58:41.830210 | orchestrator | 2026-01-07 00:58:41.830215 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-07 00:58:41.830220 | orchestrator | Wednesday 07 January 2026 00:47:38 +0000 (0:00:00.608) 0:00:00.608 ***** 2026-01-07 00:58:41.830226 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.830232 | orchestrator | 2026-01-07 00:58:41.830237 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-07 00:58:41.830263 | orchestrator | Wednesday 07 January 2026 00:47:39 +0000 (0:00:00.984) 0:00:01.593 ***** 2026-01-07 00:58:41.830268 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830273 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830277 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830282 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830286 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830291 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830295 | orchestrator | 2026-01-07 00:58:41.830300 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-07 00:58:41.830305 | orchestrator | Wednesday 07 
January 2026 00:47:41 +0000 (0:00:01.544) 0:00:03.138 ***** 2026-01-07 00:58:41.830310 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830314 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830319 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830323 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830328 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830332 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830337 | orchestrator | 2026-01-07 00:58:41.830341 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-07 00:58:41.830346 | orchestrator | Wednesday 07 January 2026 00:47:42 +0000 (0:00:00.980) 0:00:04.118 ***** 2026-01-07 00:58:41.830351 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830355 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830360 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830364 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830368 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830373 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830377 | orchestrator | 2026-01-07 00:58:41.830382 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-07 00:58:41.830386 | orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:00.929) 0:00:05.048 ***** 2026-01-07 00:58:41.830391 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830395 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830400 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830404 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830409 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830413 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830418 | orchestrator | 2026-01-07 00:58:41.830422 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-07 00:58:41.830427 | 
orchestrator | Wednesday 07 January 2026 00:47:43 +0000 (0:00:00.635) 0:00:05.684 ***** 2026-01-07 00:58:41.830431 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830436 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830441 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830455 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830460 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830464 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830469 | orchestrator | 2026-01-07 00:58:41.830473 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-07 00:58:41.830505 | orchestrator | Wednesday 07 January 2026 00:47:44 +0000 (0:00:00.641) 0:00:06.325 ***** 2026-01-07 00:58:41.830511 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830516 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830520 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830525 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830529 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830534 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830538 | orchestrator | 2026-01-07 00:58:41.830543 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-07 00:58:41.830548 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.678) 0:00:07.004 ***** 2026-01-07 00:58:41.830552 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.830558 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.830562 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.830567 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.830571 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.830581 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.830585 | orchestrator | 2026-01-07 00:58:41.830590 | orchestrator | TASK [ceph-facts : Set_fact ceph_release 
ceph_stable_release] ****************** 2026-01-07 00:58:41.830594 | orchestrator | Wednesday 07 January 2026 00:47:45 +0000 (0:00:00.627) 0:00:07.632 ***** 2026-01-07 00:58:41.830599 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830603 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830608 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830612 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830617 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830621 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830626 | orchestrator | 2026-01-07 00:58:41.830631 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-07 00:58:41.830635 | orchestrator | Wednesday 07 January 2026 00:47:46 +0000 (0:00:01.019) 0:00:08.651 ***** 2026-01-07 00:58:41.830640 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:58:41.830645 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:58:41.830649 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:58:41.830654 | orchestrator | 2026-01-07 00:58:41.830713 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-07 00:58:41.830719 | orchestrator | Wednesday 07 January 2026 00:47:47 +0000 (0:00:00.797) 0:00:09.449 ***** 2026-01-07 00:58:41.830723 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.830728 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.830732 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.830745 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.830750 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.830756 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.830761 | orchestrator | 2026-01-07 00:58:41.830766 | orchestrator | TASK [ceph-facts : Find a running 
mon container] ******************************* 2026-01-07 00:58:41.830771 | orchestrator | Wednesday 07 January 2026 00:47:48 +0000 (0:00:01.144) 0:00:10.593 ***** 2026-01-07 00:58:41.830777 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:58:41.830782 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:58:41.830788 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:58:41.830793 | orchestrator | 2026-01-07 00:58:41.830798 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-07 00:58:41.830803 | orchestrator | Wednesday 07 January 2026 00:47:51 +0000 (0:00:03.032) 0:00:13.625 ***** 2026-01-07 00:58:41.830809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:58:41.830830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:58:41.830836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:58:41.830842 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.830847 | orchestrator | 2026-01-07 00:58:41.830852 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-07 00:58:41.830858 | orchestrator | Wednesday 07 January 2026 00:47:52 +0000 (0:00:00.939) 0:00:14.565 ***** 2026-01-07 00:58:41.830865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830889 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.830895 | orchestrator | 2026-01-07 00:58:41.830900 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-07 00:58:41.830905 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:00.740) 0:00:15.306 ***** 2026-01-07 00:58:41.830916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830936 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.830942 | orchestrator | 2026-01-07 00:58:41.830947 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-07 00:58:41.830953 | orchestrator | Wednesday 07 January 2026 00:47:53 +0000 (0:00:00.321) 0:00:15.627 ***** 2026-01-07 00:58:41.830965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:47:49.547820', 'end': '2026-01-07 00:47:49.817087', 'delta': '0:00:00.269267', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:47:50.553246', 'end': '2026-01-07 00:47:50.760007', 'delta': '0:00:00.206761', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830979 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:47:51.269107', 'end': '2026-01-07 00:47:51.603960', 'delta': '0:00:00.334853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.830996 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831002 | orchestrator | 2026-01-07 00:58:41.831007 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-07 00:58:41.831012 | orchestrator | Wednesday 07 January 2026 00:47:54 +0000 (0:00:00.421) 0:00:16.049 ***** 2026-01-07 00:58:41.831018 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.831023 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.831028 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.831034 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.831039 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.831044 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.831049 | orchestrator | 2026-01-07 00:58:41.831055 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-07 00:58:41.831060 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:01.762) 0:00:17.812 ***** 2026-01-07 00:58:41.831068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.831073 | orchestrator | 2026-01-07 00:58:41.831079 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-01-07 00:58:41.831084 | orchestrator | Wednesday 07 January 2026 00:47:56 +0000 (0:00:00.795) 0:00:18.607 ***** 2026-01-07 00:58:41.831089 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831095 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831100 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831106 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831112 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831116 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831121 | orchestrator | 2026-01-07 00:58:41.831125 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-07 00:58:41.831130 | orchestrator | Wednesday 07 January 2026 00:47:58 +0000 (0:00:01.875) 0:00:20.483 ***** 2026-01-07 00:58:41.831134 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831139 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831143 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831148 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831152 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831157 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831161 | orchestrator | 2026-01-07 00:58:41.831166 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:58:41.831170 | orchestrator | Wednesday 07 January 2026 00:48:00 +0000 (0:00:01.950) 0:00:22.434 ***** 2026-01-07 00:58:41.831175 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831180 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831184 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831188 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831193 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831197 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 00:58:41.831202 | orchestrator | 2026-01-07 00:58:41.831206 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-07 00:58:41.831211 | orchestrator | Wednesday 07 January 2026 00:48:01 +0000 (0:00:01.088) 0:00:23.522 ***** 2026-01-07 00:58:41.831215 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831220 | orchestrator | 2026-01-07 00:58:41.831224 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-07 00:58:41.831229 | orchestrator | Wednesday 07 January 2026 00:48:01 +0000 (0:00:00.270) 0:00:23.792 ***** 2026-01-07 00:58:41.831233 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831238 | orchestrator | 2026-01-07 00:58:41.831243 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 00:58:41.831247 | orchestrator | Wednesday 07 January 2026 00:48:02 +0000 (0:00:00.288) 0:00:24.081 ***** 2026-01-07 00:58:41.831285 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831290 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831294 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831302 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831307 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831311 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831316 | orchestrator | 2026-01-07 00:58:41.831320 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-07 00:58:41.831325 | orchestrator | Wednesday 07 January 2026 00:48:02 +0000 (0:00:00.630) 0:00:24.711 ***** 2026-01-07 00:58:41.831329 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831334 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831338 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831343 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:58:41.831347 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831352 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831356 | orchestrator | 2026-01-07 00:58:41.831361 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-07 00:58:41.831365 | orchestrator | Wednesday 07 January 2026 00:48:03 +0000 (0:00:00.886) 0:00:25.598 ***** 2026-01-07 00:58:41.831370 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831374 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831379 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831383 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831388 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831392 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831397 | orchestrator | 2026-01-07 00:58:41.831401 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-07 00:58:41.831406 | orchestrator | Wednesday 07 January 2026 00:48:04 +0000 (0:00:00.653) 0:00:26.251 ***** 2026-01-07 00:58:41.831410 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831415 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831419 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831424 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831428 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831433 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831437 | orchestrator | 2026-01-07 00:58:41.831442 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-07 00:58:41.831446 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:00.778) 0:00:27.029 ***** 2026-01-07 00:58:41.831451 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831455 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 00:58:41.831460 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831464 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831469 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831473 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831478 | orchestrator | 2026-01-07 00:58:41.831500 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-07 00:58:41.831505 | orchestrator | Wednesday 07 January 2026 00:48:05 +0000 (0:00:00.770) 0:00:27.800 ***** 2026-01-07 00:58:41.831510 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831514 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831518 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831523 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831527 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831532 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831536 | orchestrator | 2026-01-07 00:58:41.831541 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-07 00:58:41.831548 | orchestrator | Wednesday 07 January 2026 00:48:06 +0000 (0:00:00.809) 0:00:28.609 ***** 2026-01-07 00:58:41.831552 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831557 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.831565 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.831570 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.831574 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.831578 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.831583 | orchestrator | 2026-01-07 00:58:41.831587 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-07 00:58:41.831592 | orchestrator | Wednesday 07 January 2026 00:48:07 +0000 (0:00:00.765) 
0:00:29.375 ***** 2026-01-07 00:58:41.831598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b', 'dm-uuid-LVM-fp2IefjU1GVqX3ZEIBT9uOVgnwN2u1638mEXPxJGefbIh85IScxE4Rx3rSoyFizJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1', 'dm-uuid-LVM-FuzWdYFQkSMsW1lHpsMvBoq52G22660tRBUdJeAv1WMjgd3YBxiSYi5ipcrZRTVx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831672 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e', 'dm-uuid-LVM-nSI3d8WfGayQZEMBqsvBSy4mN6nRtHcmWnDMWxoif45y5uGtb5FXrx1LpD8KATcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7RLssA-pWm9-MKY0-4SYs-vEi3-vzNl-qbfdEs', 'scsi-0QEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef', 'scsi-SQEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qoKY51-9C1a-6dz1-AENo-Jd2i-fccj-GewRhx', 'scsi-0QEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff', 'scsi-SQEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1', 'dm-uuid-LVM-Y7b4CkJzC6vDcNruRshPVijP2keHnjiYstnGQydZUpEeayicl26bW3ZIFILeWhPf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9', 'scsi-SQEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831719 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.831746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640', 'dm-uuid-LVM-cp2mqFBfalJC3YyLofJNvbodHGGMGsPSNbpYNT19FlJ1MqcMaxxX2jWsXD76Bjlm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8', 'dm-uuid-LVM-ug8RvgmxgyB3TsUc6mDRhMl1zkcvTznbCpJ2n4ksy63KYghCeyECOLu5JTAtfGL8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.831866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.831873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.832557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HTC6J9-bFdj-gkdE-m0y6-kSm3-YMFi-s5ibVj', 'scsi-0QEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab', 'scsi-SQEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j2NvUc-fuSK-tsLs-Ivbq-z3fR-TUD9-TbpDys', 'scsi-0QEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1', 'scsi-SQEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151', 'scsi-SQEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 00:58:41.833327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833393 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BCOpVX-WuWV-18wc-ofv4-NGYr-JZ0a-lNLkxw', 'scsi-0QEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c', 'scsi-SQEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833398 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.833404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kRcGXZ-yQY8-AL7O-ugfG-2RUM-V8JJ-gPzG4q', 'scsi-0QEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5', 'scsi-SQEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8', 'scsi-SQEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-07 00:58:41.833434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833595 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.833600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833664 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833669 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.833674 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.833679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 00:58:41.833728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part16', 
'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 00:58:41.833746 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.833751 | orchestrator | 2026-01-07 00:58:41.833756 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-07 00:58:41.833761 | orchestrator | Wednesday 07 January 2026 00:48:08 +0000 (0:00:01.393) 0:00:30.769 ***** 2026-01-07 00:58:41.833767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b', 'dm-uuid-LVM-fp2IefjU1GVqX3ZEIBT9uOVgnwN2u1638mEXPxJGefbIh85IScxE4Rx3rSoyFizJ'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1', 'dm-uuid-LVM-FuzWdYFQkSMsW1lHpsMvBoq52G22660tRBUdJeAv1WMjgd3YBxiSYi5ipcrZRTVx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e', 'dm-uuid-LVM-nSI3d8WfGayQZEMBqsvBSy4mN6nRtHcmWnDMWxoif45y5uGtb5FXrx1LpD8KATcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1', 'dm-uuid-LVM-Y7b4CkJzC6vDcNruRshPVijP2keHnjiYstnGQydZUpEeayicl26bW3ZIFILeWhPf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833853 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.833881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7RLssA-pWm9-MKY0-4SYs-vEi3-vzNl-qbfdEs', 'scsi-0QEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef', 'scsi-SQEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833894 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qoKY51-9C1a-6dz1-AENo-Jd2i-fccj-GewRhx', 'scsi-0QEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff', 'scsi-SQEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640', 'dm-uuid-LVM-cp2mqFBfalJC3YyLofJNvbodHGGMGsPSNbpYNT19FlJ1MqcMaxxX2jWsXD76Bjlm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9', 'scsi-SQEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833934 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8', 'dm-uuid-LVM-ug8RvgmxgyB3TsUc6mDRhMl1zkcvTznbCpJ2n4ksy63KYghCeyECOLu5JTAtfGL8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833991 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.833997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834044 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834057 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834072 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part1', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part14', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.834093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HTC6J9-bFdj-gkdE-m0y6-kSm3-YMFi-s5ibVj', 'scsi-0QEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab', 'scsi-SQEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834120 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_72a16e10-25cf-4871-b1e8-6630ea9868f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.834130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j2NvUc-fuSK-tsLs-Ivbq-z3fR-TUD9-TbpDys', 'scsi-0QEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1', 'scsi-SQEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834146 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151', 'scsi-SQEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834173 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834178 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834239 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.834257 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834262 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BCOpVX-WuWV-18wc-ofv4-NGYr-JZ0a-lNLkxw', 'scsi-0QEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c', 'scsi-SQEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834280 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kRcGXZ-yQY8-AL7O-ugfG-2RUM-V8JJ-gPzG4q', 'scsi-0QEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5', 'scsi-SQEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834297 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.834306 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ff82c9e8-95eb-4674-9070-fbf445caa94f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.834311 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.834316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8', 'scsi-SQEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834323 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834332 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.834337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834342 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.834346 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834354 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834359 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834364 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834375 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.834382 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834387 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834392 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834400 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834405 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57a83973-da93-4483-9b1b-3a04918c6db1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-07 00:58:41.834420 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 00:58:41.834425 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.834429 | orchestrator | 2026-01-07 00:58:41.834437 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-07 00:58:41.834548 | orchestrator | Wednesday 07 January 2026 00:48:10 +0000 (0:00:01.072) 0:00:31.842 ***** 2026-01-07 00:58:41.834571 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.834580 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.834587 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.834594 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.834602 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.834610 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.834617 | orchestrator | 2026-01-07 00:58:41.834625 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-07 00:58:41.834633 | orchestrator | Wednesday 07 January 2026 00:48:11 +0000 (0:00:01.275) 0:00:33.117 ***** 2026-01-07 00:58:41.834640 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.834649 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.834655 | 
orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.834663 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.834670 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.834677 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.834685 | orchestrator | 2026-01-07 00:58:41.834692 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:58:41.834700 | orchestrator | Wednesday 07 January 2026 00:48:12 +0000 (0:00:00.912) 0:00:34.030 ***** 2026-01-07 00:58:41.834715 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.834723 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.834731 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.834739 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.834747 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.834756 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.834764 | orchestrator | 2026-01-07 00:58:41.834772 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:58:41.834777 | orchestrator | Wednesday 07 January 2026 00:48:13 +0000 (0:00:01.429) 0:00:35.460 ***** 2026-01-07 00:58:41.834781 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.834786 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.834791 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.834795 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.834802 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.834811 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.834823 | orchestrator | 2026-01-07 00:58:41.834830 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-07 00:58:41.834838 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:00.628) 0:00:36.089 ***** 2026-01-07 00:58:41.834845 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 00:58:41.834852 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.834859 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.834865 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.834872 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.834879 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.834887 | orchestrator | 2026-01-07 00:58:41.834895 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-07 00:58:41.834902 | orchestrator | Wednesday 07 January 2026 00:48:14 +0000 (0:00:00.697) 0:00:36.786 ***** 2026-01-07 00:58:41.834910 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.834917 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.834925 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.834930 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.834934 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.834944 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.834948 | orchestrator | 2026-01-07 00:58:41.834953 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-07 00:58:41.834957 | orchestrator | Wednesday 07 January 2026 00:48:15 +0000 (0:00:00.740) 0:00:37.526 ***** 2026-01-07 00:58:41.834962 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-07 00:58:41.834967 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-07 00:58:41.834971 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-07 00:58:41.834976 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-07 00:58:41.834980 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-07 00:58:41.834985 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-07 00:58:41.834990 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 
2026-01-07 00:58:41.834994 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-07 00:58:41.834999 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-07 00:58:41.835003 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-07 00:58:41.835008 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:58:41.835012 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 00:58:41.835017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 00:58:41.835021 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-07 00:58:41.835026 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-07 00:58:41.835030 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-07 00:58:41.835034 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-07 00:58:41.835045 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-07 00:58:41.835049 | orchestrator | 2026-01-07 00:58:41.835054 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-07 00:58:41.835058 | orchestrator | Wednesday 07 January 2026 00:48:19 +0000 (0:00:03.789) 0:00:41.315 ***** 2026-01-07 00:58:41.835063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 00:58:41.835068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 00:58:41.835072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 00:58:41.835077 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.835081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-07 00:58:41.835085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-07 00:58:41.835090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-07 00:58:41.835094 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:58:41.835099 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-07 00:58:41.835119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-07 00:58:41.835123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-07 00:58:41.835128 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.835132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:58:41.835137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:58:41.835142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:58:41.835146 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.835151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-07 00:58:41.835166 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-07 00:58:41.835171 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-07 00:58:41.835176 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.835186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-07 00:58:41.835191 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-07 00:58:41.835196 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-07 00:58:41.835200 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.835205 | orchestrator | 2026-01-07 00:58:41.835209 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-07 00:58:41.835214 | orchestrator | Wednesday 07 January 2026 00:48:20 +0000 (0:00:00.968) 0:00:42.284 ***** 2026-01-07 00:58:41.835219 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.835223 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.835228 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.835233 | orchestrator | 
2026-01-07 00:58:41 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Wednesday 07 January 2026  00:48:21 +0000 (0:00:01.390)       0:00:43.675 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Wednesday 07 January 2026  00:48:22 +0000 (0:00:00.491)       0:00:44.166 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Wednesday 07 January 2026  00:48:22 +0000 (0:00:00.286)       0:00:44.452 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Wednesday 07 January 2026  00:48:23 +0000 (0:00:00.607)       0:00:45.060 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Wednesday 07 January 2026  00:48:23 +0000 (0:00:00.695)       0:00:45.755 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Wednesday 07 January 2026  00:48:24 +0000 (0:00:00.486)       0:00:46.242 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Wednesday 07 January 2026  00:48:25 +0000 (0:00:00.771)       0:00:47.013 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Wednesday 07 January 2026  00:48:25 +0000 (0:00:00.375)       0:00:47.389 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Wednesday 07 January 2026  00:48:25 +0000 (0:00:00.293)       0:00:47.682 *****
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Wednesday 07 January 2026  00:48:27 +0000 (0:00:01.251)       0:00:48.933 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Wednesday 07 January 2026  00:48:28 +0000 (0:00:01.289)       0:00:50.222 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 07 January 2026  00:48:30 +0000 (0:00:01.816)       0:00:52.039 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 07 January 2026  00:48:31 +0000 (0:00:01.114)       0:00:53.153 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 07 January 2026  00:48:32 +0000 (0:00:01.273)       0:00:54.427 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 07 January 2026  00:48:34 +0000 (0:00:01.438)       0:00:55.865 *****
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 07 January 2026  00:48:35 +0000 (0:00:01.361)       0:00:57.226 *****
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 07 January 2026  00:48:37 +0000 (0:00:01.619)       0:00:58.846 *****
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 07 January 2026  00:48:38 +0000 (0:00:01.015)       0:00:59.861 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 07 January 2026  00:48:40 +0000 (0:00:02.218)       0:01:02.080 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 07 January 2026  00:48:41 +0000 (0:00:00.997)       0:01:03.077 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 07 January 2026  00:48:43 +0000 (0:00:02.317)       0:01:05.395 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 07 January 2026  00:48:45 +0000 (0:00:01.432)       0:01:06.827 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 07 January 2026  00:48:46 +0000 (0:00:01.656)       0:01:08.484 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 07 January 2026  00:48:47 +0000 (0:00:00.773)       0:01:09.257 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 07 January 2026  00:48:48 +0000 (0:00:01.297)       0:01:10.555 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 07 January 2026  00:48:50 +0000 (0:00:01.278)       0:01:11.833 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 07 January 2026  00:48:50 +0000 (0:00:00.685)       0:01:12.519 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 07 January 2026  00:48:51 +0000 (0:00:00.656)       0:01:13.176 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 07 January 2026  00:48:52 +0000 (0:00:00.740)       0:01:13.916 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 07 January 2026  00:48:52 +0000 (0:00:00.652)       0:01:14.568 *****
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 07 January 2026  00:48:53 +0000 (0:00:00.746)       0:01:15.315 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 07 January 2026  00:48:54 +0000 (0:00:00.643)       0:01:15.959 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Wednesday 07 January 2026  00:48:55 +0000 (0:00:01.314)       0:01:17.273 *****
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Wednesday 07 January 2026  00:48:56 +0000 (0:00:01.412)       0:01:18.685 *****
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Wednesday 07 January 2026  00:48:59 +0000 (0:00:02.498)       0:01:21.183 *****
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Wednesday 07 January 2026  00:49:00 +0000 (0:00:01.118)       0:01:22.301 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Wednesday 07 January 2026  00:49:01 +0000 (0:00:00.515)       0:01:22.817 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Wednesday 07 January 2026  00:49:01 +0000 (0:00:00.653)       0:01:23.470 *****
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Wednesday 07 January 2026  00:49:03 +0000 (0:00:01.357)       0:01:24.828 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Wednesday 07 January 2026  00:49:04 +0000 (0:00:01.459)       0:01:26.287 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Wednesday 07 January 2026  00:49:05 +0000 (0:00:00.585)       0:01:26.872 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Wednesday 07 January 2026  00:49:05 +0000 (0:00:00.903)       0:01:27.775 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Wednesday 07 January 2026  00:49:06 +0000 (0:00:00.557)       0:01:28.333 *****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Wednesday 07 January 2026  00:49:07 +0000 (0:00:01.331)       0:01:29.664 *****
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Wednesday 07 January 2026  00:50:01 +0000 (0:00:53.330)       0:02:22.994 *****
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Wednesday 07 January 2026  00:50:01 +0000 (0:00:00.758)       0:02:23.752 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Wednesday 07 January 2026  00:50:02 +0000 (0:00:00.921)       0:02:24.674 *****
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Wednesday 07 January 2026  00:50:03 +0000 (0:00:00.193)       0:02:24.868 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Wednesday 07 January 2026  00:50:03 +0000 (0:00:00.765)       0:02:25.633 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Wednesday 07 January 2026  00:50:04 +0000 (0:00:00.986)       0:02:26.620 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Wednesday 07 January 2026  00:50:05 +0000 (0:00:00.709)       0:02:27.329 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Wednesday 07 January 2026  00:50:08 +0000 (0:00:02.742)       0:02:30.071 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Wednesday 07 January 2026  00:50:08 +0000 (0:00:00.491)       0:02:30.562 *****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Wednesday 07 January 2026  00:50:09 +0000 (0:00:00.894)       0:02:31.456 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Wednesday 07 January 2026  00:50:10 +0000 (0:00:00.734)       0:02:32.191 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838074 | orchestrator | 2026-01-07 00:58:41.838079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-07 00:58:41.838084 | orchestrator | Wednesday 07 January 2026 00:50:10 +0000 (0:00:00.547) 0:02:32.738 ***** 2026-01-07 00:58:41.838089 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838093 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838107 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838113 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838118 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838122 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838127 | orchestrator | 2026-01-07 00:58:41.838132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-07 00:58:41.838136 | orchestrator | Wednesday 07 January 2026 00:50:11 +0000 (0:00:00.717) 0:02:33.456 ***** 2026-01-07 00:58:41.838141 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838146 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838150 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838155 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838175 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838180 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838190 | orchestrator | 2026-01-07 00:58:41.838195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-07 00:58:41.838200 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:00.535) 0:02:33.991 ***** 2026-01-07 00:58:41.838205 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838209 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838214 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838219 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838224 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838228 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838233 | orchestrator | 2026-01-07 00:58:41.838238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-07 00:58:41.838242 | orchestrator | Wednesday 07 January 2026 00:50:12 +0000 (0:00:00.698) 0:02:34.689 ***** 2026-01-07 00:58:41.838247 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838252 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838256 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838261 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838265 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838270 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838274 | orchestrator | 2026-01-07 00:58:41.838279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-07 00:58:41.838284 | orchestrator | Wednesday 07 January 2026 00:50:13 +0000 (0:00:00.532) 0:02:35.222 ***** 2026-01-07 00:58:41.838288 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838293 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838297 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838302 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838306 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838311 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838316 | orchestrator | 2026-01-07 00:58:41.838320 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-07 00:58:41.838325 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:00.734) 0:02:35.957 ***** 2026-01-07 00:58:41.838330 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.838334 | 
orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.838338 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.838343 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838347 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.838352 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838356 | orchestrator | 2026-01-07 00:58:41.838361 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-07 00:58:41.838368 | orchestrator | Wednesday 07 January 2026 00:50:14 +0000 (0:00:00.648) 0:02:36.605 ***** 2026-01-07 00:58:41.838373 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.838378 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.838382 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.838387 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.838391 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.838396 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.838400 | orchestrator | 2026-01-07 00:58:41.838405 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-07 00:58:41.838409 | orchestrator | Wednesday 07 January 2026 00:50:15 +0000 (0:00:00.915) 0:02:37.521 ***** 2026-01-07 00:58:41.838414 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.838419 | orchestrator | 2026-01-07 00:58:41.838424 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-07 00:58:41.838428 | orchestrator | Wednesday 07 January 2026 00:50:16 +0000 (0:00:00.933) 0:02:38.454 ***** 2026-01-07 00:58:41.838433 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-07 00:58:41.838438 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-07 00:58:41.838447 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-07 00:58:41.838452 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-07 00:58:41.838456 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-07 00:58:41.838461 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-07 00:58:41.838465 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838470 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838475 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-07 00:58:41.838528 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838537 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-07 00:58:41.838566 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-07 00:58:41.838571 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-07 00:58:41.838576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-07 00:58:41.838580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-07 00:58:41.838585 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-07 
00:58:41.838589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838594 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838599 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-07 00:58:41.838608 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838617 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838621 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838626 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-07 00:58:41.838635 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838649 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838653 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-07 00:58:41.838658 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838663 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838667 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838672 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838676 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838681 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-07 00:58:41.838685 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838690 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838700 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838709 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-07 00:58:41.838718 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838725 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838730 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-07 00:58:41.838748 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838753 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:58:41.838757 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:58:41.838762 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838766 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838771 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 
2026-01-07 00:58:41.838775 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838780 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-07 00:58:41.838784 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838789 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:58:41.838793 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838798 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:58:41.838807 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838812 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-07 00:58:41.838816 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838830 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838835 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-07 00:58:41.838848 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838852 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-07 00:58:41.838857 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-07 00:58:41.838861 | orchestrator | changed: [testbed-node-0] 
=> (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838866 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838871 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-07 00:58:41.838875 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-07 00:58:41.838880 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838884 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838893 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-07 00:58:41.838898 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-07 00:58:41.838902 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-07 00:58:41.838907 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-07 00:58:41.838912 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-07 00:58:41.838916 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-07 00:58:41.838921 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-07 00:58:41.838926 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-07 00:58:41.838931 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-07 00:58:41.838935 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-07 00:58:41.838940 | orchestrator | 2026-01-07 00:58:41.838944 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-07 00:58:41.838949 | orchestrator | Wednesday 07 January 2026 00:50:24 +0000 (0:00:07.871) 0:02:46.326 ***** 2026-01-07 00:58:41.838953 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.838958 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
00:58:41.838962 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.838967 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.838972 | orchestrator | 2026-01-07 00:58:41.838976 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-07 00:58:41.838981 | orchestrator | Wednesday 07 January 2026 00:50:25 +0000 (0:00:00.951) 0:02:47.277 ***** 2026-01-07 00:58:41.838986 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.838991 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.838999 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.839004 | orchestrator | 2026-01-07 00:58:41.839009 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-07 00:58:41.839013 | orchestrator | Wednesday 07 January 2026 00:50:26 +0000 (0:00:00.947) 0:02:48.225 ***** 2026-01-07 00:58:41.839018 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.839022 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.839027 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.839032 | orchestrator | 2026-01-07 00:58:41.839036 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-01-07 00:58:41.839041 | orchestrator | Wednesday 07 January 2026 00:50:27 +0000 (0:00:01.212) 0:02:49.438 ***** 2026-01-07 00:58:41.839046 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.839050 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.839055 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.839060 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839064 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839069 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839074 | orchestrator | 2026-01-07 00:58:41.839078 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-07 00:58:41.839083 | orchestrator | Wednesday 07 January 2026 00:50:28 +0000 (0:00:00.695) 0:02:50.134 ***** 2026-01-07 00:58:41.839087 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.839093 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.839105 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.839116 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839124 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839132 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839139 | orchestrator | 2026-01-07 00:58:41.839146 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-07 00:58:41.839154 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:00.859) 0:02:50.993 ***** 2026-01-07 00:58:41.839162 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839169 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839176 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839183 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839191 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839198 | orchestrator | skipping: [testbed-node-2] 2026-01-07 
00:58:41.839206 | orchestrator | 2026-01-07 00:58:41.839220 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-07 00:58:41.839225 | orchestrator | Wednesday 07 January 2026 00:50:29 +0000 (0:00:00.561) 0:02:51.555 ***** 2026-01-07 00:58:41.839230 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839235 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839239 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839244 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839248 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839253 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839257 | orchestrator | 2026-01-07 00:58:41.839262 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-07 00:58:41.839267 | orchestrator | Wednesday 07 January 2026 00:50:30 +0000 (0:00:00.881) 0:02:52.436 ***** 2026-01-07 00:58:41.839272 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839276 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839281 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839286 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839290 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839295 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839300 | orchestrator | 2026-01-07 00:58:41.839305 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-07 00:58:41.839309 | orchestrator | Wednesday 07 January 2026 00:50:31 +0000 (0:00:00.558) 0:02:52.994 ***** 2026-01-07 00:58:41.839314 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839319 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839323 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839328 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:58:41.839333 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839338 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839342 | orchestrator | 2026-01-07 00:58:41.839347 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-07 00:58:41.839352 | orchestrator | Wednesday 07 January 2026 00:50:31 +0000 (0:00:00.698) 0:02:53.693 ***** 2026-01-07 00:58:41.839357 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839362 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839367 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839372 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839376 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839381 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839386 | orchestrator | 2026-01-07 00:58:41.839391 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-07 00:58:41.839396 | orchestrator | Wednesday 07 January 2026 00:50:32 +0000 (0:00:00.538) 0:02:54.232 ***** 2026-01-07 00:58:41.839400 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.839405 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.839409 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.839419 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839424 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839429 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839433 | orchestrator | 2026-01-07 00:58:41.839439 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-07 00:58:41.839443 | orchestrator | Wednesday 07 January 2026 00:50:33 +0000 (0:00:00.660) 0:02:54.892 ***** 2026-01-07 00:58:41.839448 | orchestrator | skipping: 
[testbed-node-0] 2026-01-07 00:58:41.839457 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839462 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839466 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.839471 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.839475 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.839517 | orchestrator | 2026-01-07 00:58:41.839523 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-07 00:58:41.839528 | orchestrator | Wednesday 07 January 2026 00:50:35 +0000 (0:00:02.815) 0:02:57.707 ***** 2026-01-07 00:58:41.839532 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.839537 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.839542 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.839547 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839551 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839556 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839560 | orchestrator | 2026-01-07 00:58:41.839565 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-07 00:58:41.839570 | orchestrator | Wednesday 07 January 2026 00:50:36 +0000 (0:00:00.745) 0:02:58.453 ***** 2026-01-07 00:58:41.839574 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.839579 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.839584 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.839588 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.839593 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.839597 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.839602 | orchestrator | 2026-01-07 00:58:41.839607 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-07 00:58:41.839611 | orchestrator | Wednesday 07 January 2026 00:50:37 +0000 
(0:00:00.771) 0:02:59.224 *****
2026-01-07 00:58:41.839616 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839620 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839625 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839630 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839634 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839639 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839643 | orchestrator |
2026-01-07 00:58:41.839648 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-07 00:58:41.839653 | orchestrator | Wednesday 07 January 2026 00:50:38 +0000 (0:00:00.797) 0:03:00.022 *****
2026-01-07 00:58:41.839657 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-07 00:58:41.839662 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-07 00:58:41.839667 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-07 00:58:41.839671 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839680 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839685 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839690 | orchestrator |
2026-01-07 00:58:41.839695 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-07 00:58:41.839699 | orchestrator | Wednesday 07 January 2026 00:50:39 +0000 (0:00:00.911) 0:03:00.933 *****
2026-01-07 00:58:41.839706 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-07 00:58:41.839721 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-07 00:58:41.839727 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839732 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-07 00:58:41.839737 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-07 00:58:41.839741 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-07 00:58:41.839746 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-07 00:58:41.839754 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839759 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839763 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839768 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839772 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839777 | orchestrator |
2026-01-07 00:58:41.839781 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-07 00:58:41.839786 | orchestrator | Wednesday 07 January 2026 00:50:39 +0000 (0:00:00.700) 0:03:01.633 *****
2026-01-07 00:58:41.839791 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839795 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839800 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839804 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839809 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839813 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839818 | orchestrator |
2026-01-07 00:58:41.839823 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-07 00:58:41.839827 | orchestrator | Wednesday 07 January 2026 00:50:40 +0000 (0:00:00.446) 0:03:02.079 *****
2026-01-07 00:58:41.839832 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839836 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839841 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839846 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839850 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839855 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839859 | orchestrator |
2026-01-07 00:58:41.839864 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-07 00:58:41.839869 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.782) 0:03:02.862 *****
2026-01-07 00:58:41.839873 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839878 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839882 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839890 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839895 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839899 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839904 | orchestrator |
2026-01-07 00:58:41.839908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-07 00:58:41.839913 | orchestrator | Wednesday 07 January 2026 00:50:41 +0000 (0:00:00.679) 0:03:03.541 *****
2026-01-07 00:58:41.839917 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839922 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839926 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839931 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839935 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839940 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839944 | orchestrator |
2026-01-07 00:58:41.839949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-07 00:58:41.839957 | orchestrator | Wednesday 07 January 2026 00:50:42 +0000 (0:00:00.895) 0:03:04.436 *****
2026-01-07 00:58:41.839962 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.839967 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.839971 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.839976 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.839980 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.839985 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.839989 | orchestrator |
2026-01-07 00:58:41.839994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-07 00:58:41.839999 | orchestrator | Wednesday 07 January 2026 00:50:43 +0000 (0:00:00.692) 0:03:05.129 *****
2026-01-07 00:58:41.840003 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.840008 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.840012 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.840017 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840022 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840026 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.840031 | orchestrator |
2026-01-07 00:58:41.840036 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-07 00:58:41.840040 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.735) 0:03:05.864 *****
2026-01-07 00:58:41.840045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840059 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840063 | orchestrator |
2026-01-07 00:58:41.840068 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-07 00:58:41.840072 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.330) 0:03:06.194 *****
2026-01-07 00:58:41.840077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840091 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840095 | orchestrator |
2026-01-07 00:58:41.840100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-07 00:58:41.840104 | orchestrator | Wednesday 07 January 2026 00:50:44 +0000 (0:00:00.420) 0:03:06.615 *****
2026-01-07 00:58:41.840109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840118 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840122 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840127 | orchestrator |
2026-01-07 00:58:41.840132 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-07 00:58:41.840141 | orchestrator | Wednesday 07 January 2026 00:50:45 +0000 (0:00:00.446) 0:03:07.061 *****
2026-01-07 00:58:41.840145 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.840150 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.840155 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.840159 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840164 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.840168 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840176 | orchestrator |
2026-01-07 00:58:41.840181 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-07 00:58:41.840185 | orchestrator | Wednesday 07 January 2026 00:50:46 +0000 (0:00:01.165) 0:03:08.227 *****
2026-01-07 00:58:41.840190 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 00:58:41.840195 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-07 00:58:41.840199 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840204 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-07 00:58:41.840208 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-07 00:58:41.840213 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840217 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-07 00:58:41.840222 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-07 00:58:41.840226 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.840231 | orchestrator |
2026-01-07 00:58:41.840235 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-07 00:58:41.840240 | orchestrator | Wednesday 07 January 2026 00:50:48 +0000 (0:00:02.206) 0:03:10.434 *****
2026-01-07 00:58:41.840244 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.840249 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:41.840253 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.840258 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.840262 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:41.840267 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:41.840271 | orchestrator |
2026-01-07 00:58:41.840276 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:58:41.840280 | orchestrator | Wednesday 07 January 2026 00:50:51 +0000 (0:00:03.033) 0:03:13.468 *****
2026-01-07 00:58:41.840285 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.840289 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.840294 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.840298 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:41.840303 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:41.840307 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:41.840311 | orchestrator |
2026-01-07 00:58:41.840316 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-07 00:58:41.840321 | orchestrator | Wednesday 07 January 2026 00:50:52 +0000 (0:00:01.284) 0:03:14.752 *****
2026-01-07 00:58:41.840325 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840330 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.840334 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.840339 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.840343 | orchestrator |
2026-01-07 00:58:41.840348 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-07 00:58:41.840356 | orchestrator | Wednesday 07 January 2026 00:50:53 +0000 (0:00:00.944) 0:03:15.697 *****
2026-01-07 00:58:41.840360 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.840365 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.840370 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.840374 | orchestrator |
2026-01-07 00:58:41.840379 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-07 00:58:41.840384 | orchestrator | Wednesday 07 January 2026 00:50:54 +0000 (0:00:00.303) 0:03:16.001 *****
2026-01-07 00:58:41.840388 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:41.840393 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:41.840401 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:41.840406 | orchestrator |
2026-01-07 00:58:41.840411 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-07 00:58:41.840415 | orchestrator | Wednesday 07 January 2026 00:50:55 +0000 (0:00:01.194) 0:03:17.196 *****
2026-01-07 00:58:41.840420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:58:41.840424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:58:41.840429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:58:41.840434 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840438 | orchestrator |
2026-01-07 00:58:41.840443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-07 00:58:41.840447 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:01.004) 0:03:18.200 *****
2026-01-07 00:58:41.840452 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.840456 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.840461 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.840465 | orchestrator |
2026-01-07 00:58:41.840470 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-07 00:58:41.840475 | orchestrator | Wednesday 07 January 2026 00:50:56 +0000 (0:00:00.413) 0:03:18.613 *****
2026-01-07 00:58:41.840496 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840500 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840505 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.840509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.840514 | orchestrator |
2026-01-07 00:58:41.840519 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-07 00:58:41.840523 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:01.254) 0:03:19.868 *****
2026-01-07 00:58:41.840528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840542 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840546 | orchestrator |
2026-01-07 00:58:41.840551 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-07 00:58:41.840556 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:00.423) 0:03:20.291 *****
2026-01-07 00:58:41.840561 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840565 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.840570 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.840575 | orchestrator |
2026-01-07 00:58:41.840579 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-07 00:58:41.840587 | orchestrator | Wednesday 07 January 2026 00:50:58 +0000 (0:00:00.388) 0:03:20.680 *****
2026-01-07 00:58:41.840592 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840596 | orchestrator |
2026-01-07 00:58:41.840601 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-07 00:58:41.840605 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:00.270) 0:03:20.951 *****
2026-01-07 00:58:41.840610 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840614 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.840619 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.840623 | orchestrator |
2026-01-07 00:58:41.840628 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-07 00:58:41.840633 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:00.311) 0:03:21.262 *****
2026-01-07 00:58:41.840637 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840642 | orchestrator |
2026-01-07 00:58:41.840647 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-07 00:58:41.840651 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:00.221) 0:03:21.484 *****
2026-01-07 00:58:41.840660 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840664 | orchestrator |
2026-01-07 00:58:41.840669 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-07 00:58:41.840674 | orchestrator | Wednesday 07 January 2026 00:50:59 +0000 (0:00:00.223) 0:03:21.707 *****
2026-01-07 00:58:41.840679 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840684 | orchestrator |
2026-01-07 00:58:41.840688 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-07 00:58:41.840693 | orchestrator | Wednesday 07 January 2026 00:51:00 +0000 (0:00:00.111) 0:03:21.819 *****
2026-01-07 00:58:41.840697 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840702 | orchestrator |
2026-01-07 00:58:41.840706 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-07 00:58:41.840711 | orchestrator | Wednesday 07 January 2026 00:51:00 +0000 (0:00:00.627) 0:03:22.447 *****
2026-01-07 00:58:41.840716 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840720 | orchestrator |
2026-01-07 00:58:41.840725 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-07 00:58:41.840729 | orchestrator | Wednesday 07 January 2026 00:51:00 +0000 (0:00:00.191) 0:03:22.638 *****
2026-01-07 00:58:41.840734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840748 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840753 | orchestrator |
2026-01-07 00:58:41.840758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-07 00:58:41.840766 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:00.333) 0:03:22.971 *****
2026-01-07 00:58:41.840771 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840776 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.840780 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.840785 | orchestrator |
2026-01-07 00:58:41.840789 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-07 00:58:41.840794 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:00.238) 0:03:23.209 *****
2026-01-07 00:58:41.840799 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840803 | orchestrator |
2026-01-07 00:58:41.840808 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-07 00:58:41.840813 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:00.175) 0:03:23.385 *****
2026-01-07 00:58:41.840817 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840822 | orchestrator |
2026-01-07 00:58:41.840826 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-07 00:58:41.840831 | orchestrator | Wednesday 07 January 2026 00:51:01 +0000 (0:00:00.167) 0:03:23.553 *****
2026-01-07 00:58:41.840836 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840841 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840846 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.840850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.840855 | orchestrator |
2026-01-07 00:58:41.840860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-07 00:58:41.840865 | orchestrator | Wednesday 07 January 2026 00:51:02 +0000 (0:00:00.896) 0:03:24.449 *****
2026-01-07 00:58:41.840869 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.840874 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.840878 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.840883 | orchestrator |
2026-01-07 00:58:41.840888 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-07 00:58:41.840892 | orchestrator | Wednesday 07 January 2026 00:51:02 +0000 (0:00:00.306) 0:03:24.755 *****
2026-01-07 00:58:41.840897 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.840901 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.840909 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.840914 | orchestrator |
2026-01-07 00:58:41.840918 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-07 00:58:41.840923 | orchestrator | Wednesday 07 January 2026 00:51:04 +0000 (0:00:01.206) 0:03:25.962 *****
2026-01-07 00:58:41.840927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.840932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.840937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.840941 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.840946 | orchestrator |
2026-01-07 00:58:41.840950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-07 00:58:41.840955 | orchestrator | Wednesday 07 January 2026 00:51:04 +0000 (0:00:00.696) 0:03:26.658 *****
2026-01-07 00:58:41.840960 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.840964 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.840969 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.840973 | orchestrator |
2026-01-07 00:58:41.840980 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-07 00:58:41.840985 | orchestrator | Wednesday 07 January 2026 00:51:05 +0000 (0:00:00.417) 0:03:27.076 *****
2026-01-07 00:58:41.840989 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.840994 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.840998 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841003 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.841008 | orchestrator |
2026-01-07 00:58:41.841012 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-07 00:58:41.841017 | orchestrator | Wednesday 07 January 2026 00:51:05 +0000 (0:00:00.723) 0:03:27.799 *****
2026-01-07 00:58:41.841022 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.841026 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.841031 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.841036 | orchestrator |
2026-01-07 00:58:41.841041 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-07 00:58:41.841045 | orchestrator | Wednesday 07 January 2026 00:51:06 +0000 (0:00:00.419) 0:03:28.218 *****
2026-01-07 00:58:41.841050 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.841055 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.841059 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.841064 | orchestrator |
2026-01-07 00:58:41.841068 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-07 00:58:41.841073 | orchestrator | Wednesday 07 January 2026 00:51:07 +0000 (0:00:01.014) 0:03:29.233 *****
2026-01-07 00:58:41.841078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.841083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.841087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.841092 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.841096 | orchestrator |
2026-01-07 00:58:41.841101 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-07 00:58:41.841105 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.615) 0:03:29.848 *****
2026-01-07 00:58:41.841110 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.841115 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.841119 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.841124 | orchestrator |
2026-01-07 00:58:41.841129 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-07 00:58:41.841133 | orchestrator | Wednesday 07 January 2026 00:51:08 +0000 (0:00:00.328) 0:03:30.177 *****
2026-01-07 00:58:41.841138 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.841143 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.841147 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.841155 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841160 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841169 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841174 | orchestrator |
2026-01-07 00:58:41.841179 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-07 00:58:41.841184 | orchestrator | Wednesday 07 January 2026 00:51:09 +0000 (0:00:00.881) 0:03:31.059 *****
2026-01-07 00:58:41.841188 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.841193 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.841197 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.841202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.841207 | orchestrator |
2026-01-07 00:58:41.841211 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-07 00:58:41.841216 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:00.842) 0:03:31.902 *****
2026-01-07 00:58:41.841220 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841225 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841229 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841234 | orchestrator |
2026-01-07 00:58:41.841238 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-07 00:58:41.841243 | orchestrator | Wednesday 07 January 2026 00:51:10 +0000 (0:00:00.578) 0:03:32.480 *****
2026-01-07 00:58:41.841247 | orchestrator | changed: [testbed-node-0]
2026-01-07 00:58:41.841252 | orchestrator | changed: [testbed-node-1]
2026-01-07 00:58:41.841257 | orchestrator | changed: [testbed-node-2]
2026-01-07 00:58:41.841261 | orchestrator |
2026-01-07 00:58:41.841266 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-07 00:58:41.841270 | orchestrator | Wednesday 07 January 2026 00:51:11 +0000 (0:00:01.069) 0:03:33.550 *****
2026-01-07 00:58:41.841275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-07 00:58:41.841280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-07 00:58:41.841284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-07 00:58:41.841289 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841293 | orchestrator |
2026-01-07 00:58:41.841298 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-07 00:58:41.841302 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.598) 0:03:34.148 *****
2026-01-07 00:58:41.841307 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841311 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841316 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841321 | orchestrator |
2026-01-07 00:58:41.841325 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-07 00:58:41.841330 | orchestrator |
2026-01-07 00:58:41.841334 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:58:41.841339 | orchestrator | Wednesday 07 January 2026 00:51:12 +0000 (0:00:00.550) 0:03:34.699 *****
2026-01-07 00:58:41.841343 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.841348 | orchestrator |
2026-01-07 00:58:41.841353 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:58:41.841357 | orchestrator | Wednesday 07 January 2026 00:51:13 +0000 (0:00:00.698) 0:03:35.398 *****
2026-01-07 00:58:41.841365 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.841369 | orchestrator |
2026-01-07 00:58:41.841374 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:58:41.841379 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:00.459) 0:03:35.857 *****
2026-01-07 00:58:41.841384 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841388 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841393 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841401 | orchestrator |
2026-01-07 00:58:41.841406 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:58:41.841411 | orchestrator | Wednesday 07 January 2026 00:51:14 +0000 (0:00:00.891) 0:03:36.748 *****
2026-01-07 00:58:41.841416 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841421 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841425 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841430 | orchestrator |
2026-01-07 00:58:41.841435 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:58:41.841439 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:00.429) 0:03:37.178 *****
2026-01-07 00:58:41.841444 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841449 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841453 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841458 | orchestrator |
2026-01-07 00:58:41.841462 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:58:41.841467 | orchestrator | Wednesday 07 January 2026 00:51:15 +0000 (0:00:00.477) 0:03:37.656 *****
2026-01-07 00:58:41.841471 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841476 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841491 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841496 | orchestrator |
2026-01-07 00:58:41.841501 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:58:41.841505 | orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:00.283) 0:03:37.939 *****
2026-01-07 00:58:41.841510 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841515 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841519 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841524 | orchestrator |
2026-01-07 00:58:41.841528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:58:41.841533 | orchestrator | Wednesday 07 January 2026 00:51:16 +0000 (0:00:00.807) 0:03:38.746 *****
2026-01-07 00:58:41.841538 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841542 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841547 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841551 | orchestrator |
2026-01-07 00:58:41.841556 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:58:41.841561 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:00.260) 0:03:39.007 *****
2026-01-07 00:58:41.841570 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841574 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841579 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841584 | orchestrator |
2026-01-07 00:58:41.841588 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:58:41.841593 | orchestrator | Wednesday 07 January 2026 00:51:17 +0000 (0:00:00.253) 0:03:39.260 *****
2026-01-07 00:58:41.841597 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841602 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841607 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841611 | orchestrator |
2026-01-07 00:58:41.841616 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:58:41.841620 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:00.669) 0:03:39.930 *****
2026-01-07 00:58:41.841625 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841630 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841634 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841639 | orchestrator |
2026-01-07 00:58:41.841644 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:58:41.841648 | orchestrator | Wednesday 07 January 2026 00:51:18 +0000 (0:00:00.758) 0:03:40.688 *****
2026-01-07 00:58:41.841653 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841658 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841663 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841667 | orchestrator |
2026-01-07 00:58:41.841672 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:58:41.841680 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.262) 0:03:40.950 *****
2026-01-07 00:58:41.841685 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.841690 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.841695 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.841699 | orchestrator |
2026-01-07 00:58:41.841704 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:58:41.841708 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.298) 0:03:41.248 *****
2026-01-07 00:58:41.841713 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841718 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841722 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841727 | orchestrator |
2026-01-07 00:58:41.841731 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:58:41.841736 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.284) 0:03:41.532 *****
2026-01-07 00:58:41.841740 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841745 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841749 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841754 | orchestrator |
2026-01-07 00:58:41.841759 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:58:41.841764 | orchestrator | Wednesday 07 January 2026 00:51:19 +0000 (0:00:00.222) 0:03:41.754 *****
2026-01-07 00:58:41.841768 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.841773 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.841777 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.841782 | orchestrator |
2026-01-07 00:58:41.841786 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-07 00:58:41.841791 | orchestrator | Wednesday 07 January
2026 00:51:20 +0000 (0:00:00.488) 0:03:42.243 ***** 2026-01-07 00:58:41.841796 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.841804 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.841809 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.841813 | orchestrator | 2026-01-07 00:58:41.841818 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.841824 | orchestrator | Wednesday 07 January 2026 00:51:20 +0000 (0:00:00.238) 0:03:42.481 ***** 2026-01-07 00:58:41.841828 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.841833 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.841837 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.841842 | orchestrator | 2026-01-07 00:58:41.841847 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.841852 | orchestrator | Wednesday 07 January 2026 00:51:20 +0000 (0:00:00.317) 0:03:42.799 ***** 2026-01-07 00:58:41.841856 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.841861 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.841865 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.841870 | orchestrator | 2026-01-07 00:58:41.841875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.841879 | orchestrator | Wednesday 07 January 2026 00:51:21 +0000 (0:00:00.280) 0:03:43.080 ***** 2026-01-07 00:58:41.841884 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.841889 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.841893 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.841898 | orchestrator | 2026-01-07 00:58:41.841903 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.841907 | orchestrator | Wednesday 07 January 2026 00:51:21 +0000 (0:00:00.469) 
0:03:43.549 ***** 2026-01-07 00:58:41.841912 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.841916 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.841921 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.841925 | orchestrator | 2026-01-07 00:58:41.841930 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:58:41.841935 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:00.429) 0:03:43.979 ***** 2026-01-07 00:58:41.841945 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.841950 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.841954 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.841959 | orchestrator | 2026-01-07 00:58:41.841963 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-07 00:58:41.841968 | orchestrator | Wednesday 07 January 2026 00:51:22 +0000 (0:00:00.236) 0:03:44.215 ***** 2026-01-07 00:58:41.841972 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.841977 | orchestrator | 2026-01-07 00:58:41.841982 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-07 00:58:41.841986 | orchestrator | Wednesday 07 January 2026 00:51:23 +0000 (0:00:00.640) 0:03:44.856 ***** 2026-01-07 00:58:41.841991 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.841996 | orchestrator | 2026-01-07 00:58:41.842005 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-07 00:58:41.842010 | orchestrator | Wednesday 07 January 2026 00:51:23 +0000 (0:00:00.136) 0:03:44.992 ***** 2026-01-07 00:58:41.842048 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-07 00:58:41.842052 | orchestrator | 2026-01-07 00:58:41.842057 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-01-07 00:58:41.842062 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:00.891) 0:03:45.883 ***** 2026-01-07 00:58:41.842066 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842071 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842075 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842080 | orchestrator | 2026-01-07 00:58:41.842085 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-07 00:58:41.842090 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:00.388) 0:03:46.272 ***** 2026-01-07 00:58:41.842095 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842100 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842105 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842110 | orchestrator | 2026-01-07 00:58:41.842115 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-07 00:58:41.842120 | orchestrator | Wednesday 07 January 2026 00:51:24 +0000 (0:00:00.362) 0:03:46.635 ***** 2026-01-07 00:58:41.842124 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842129 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842133 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842138 | orchestrator | 2026-01-07 00:58:41.842143 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-07 00:58:41.842147 | orchestrator | Wednesday 07 January 2026 00:51:26 +0000 (0:00:01.373) 0:03:48.008 ***** 2026-01-07 00:58:41.842152 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842156 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842161 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842165 | orchestrator | 2026-01-07 00:58:41.842170 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-01-07 00:58:41.842175 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:00.876) 0:03:48.885 ***** 2026-01-07 00:58:41.842179 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842184 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842188 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842193 | orchestrator | 2026-01-07 00:58:41.842198 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-07 00:58:41.842202 | orchestrator | Wednesday 07 January 2026 00:51:27 +0000 (0:00:00.759) 0:03:49.644 ***** 2026-01-07 00:58:41.842207 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842212 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842216 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842221 | orchestrator | 2026-01-07 00:58:41.842225 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-07 00:58:41.842235 | orchestrator | Wednesday 07 January 2026 00:51:28 +0000 (0:00:00.720) 0:03:50.365 ***** 2026-01-07 00:58:41.842240 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842245 | orchestrator | 2026-01-07 00:58:41.842249 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-07 00:58:41.842254 | orchestrator | Wednesday 07 January 2026 00:51:30 +0000 (0:00:01.515) 0:03:51.881 ***** 2026-01-07 00:58:41.842259 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842263 | orchestrator | 2026-01-07 00:58:41.842271 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-07 00:58:41.842276 | orchestrator | Wednesday 07 January 2026 00:51:31 +0000 (0:00:01.301) 0:03:53.182 ***** 2026-01-07 00:58:41.842281 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.842286 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.842291 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.842295 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:58:41.842300 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-07 00:58:41.842305 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 00:58:41.842309 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:58:41.842314 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-07 00:58:41.842319 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 00:58:41.842323 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-07 00:58:41.842328 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-07 00:58:41.842332 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-07 00:58:41.842337 | orchestrator | 2026-01-07 00:58:41.842341 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-07 00:58:41.842346 | orchestrator | Wednesday 07 January 2026 00:51:34 +0000 (0:00:03.178) 0:03:56.361 ***** 2026-01-07 00:58:41.842351 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842355 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842360 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842365 | orchestrator | 2026-01-07 00:58:41.842369 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-07 00:58:41.842374 | orchestrator | Wednesday 07 January 2026 00:51:35 +0000 (0:00:01.193) 0:03:57.555 ***** 2026-01-07 00:58:41.842378 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842383 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842387 | orchestrator | ok: [testbed-node-2] 
2026-01-07 00:58:41.842392 | orchestrator | 2026-01-07 00:58:41.842397 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-07 00:58:41.842401 | orchestrator | Wednesday 07 January 2026 00:51:36 +0000 (0:00:00.329) 0:03:57.885 ***** 2026-01-07 00:58:41.842406 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842411 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842415 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842420 | orchestrator | 2026-01-07 00:58:41.842425 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-07 00:58:41.842429 | orchestrator | Wednesday 07 January 2026 00:51:36 +0000 (0:00:00.562) 0:03:58.447 ***** 2026-01-07 00:58:41.842434 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842449 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842454 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842459 | orchestrator | 2026-01-07 00:58:41.842463 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-07 00:58:41.842469 | orchestrator | Wednesday 07 January 2026 00:51:38 +0000 (0:00:01.655) 0:04:00.102 ***** 2026-01-07 00:58:41.842473 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842498 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842503 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842508 | orchestrator | 2026-01-07 00:58:41.842517 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-07 00:58:41.842522 | orchestrator | Wednesday 07 January 2026 00:51:39 +0000 (0:00:01.528) 0:04:01.630 ***** 2026-01-07 00:58:41.842527 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.842532 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.842536 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.842541 
| orchestrator | 2026-01-07 00:58:41.842546 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-07 00:58:41.842550 | orchestrator | Wednesday 07 January 2026 00:51:40 +0000 (0:00:00.345) 0:04:01.976 ***** 2026-01-07 00:58:41.842555 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.842559 | orchestrator | 2026-01-07 00:58:41.842564 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-07 00:58:41.842568 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:00.896) 0:04:02.873 ***** 2026-01-07 00:58:41.842574 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.842578 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.842583 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.842587 | orchestrator | 2026-01-07 00:58:41.842592 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-07 00:58:41.842597 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:00.315) 0:04:03.189 ***** 2026-01-07 00:58:41.842602 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.842607 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.842612 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.842617 | orchestrator | 2026-01-07 00:58:41.842622 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-07 00:58:41.842626 | orchestrator | Wednesday 07 January 2026 00:51:41 +0000 (0:00:00.325) 0:04:03.514 ***** 2026-01-07 00:58:41.842631 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.842636 | orchestrator | 2026-01-07 00:58:41.842641 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-01-07 00:58:41.842645 | orchestrator | Wednesday 07 January 2026 00:51:42 +0000 (0:00:00.815) 0:04:04.330 ***** 2026-01-07 00:58:41.842651 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842655 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842660 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842664 | orchestrator | 2026-01-07 00:58:41.842669 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-07 00:58:41.842673 | orchestrator | Wednesday 07 January 2026 00:51:44 +0000 (0:00:01.740) 0:04:06.070 ***** 2026-01-07 00:58:41.842681 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842685 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842690 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842695 | orchestrator | 2026-01-07 00:58:41.842700 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-07 00:58:41.842704 | orchestrator | Wednesday 07 January 2026 00:51:45 +0000 (0:00:01.065) 0:04:07.136 ***** 2026-01-07 00:58:41.842709 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842713 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842718 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842723 | orchestrator | 2026-01-07 00:58:41.842727 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-07 00:58:41.842732 | orchestrator | Wednesday 07 January 2026 00:51:46 +0000 (0:00:01.567) 0:04:08.703 ***** 2026-01-07 00:58:41.842737 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.842741 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.842746 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.842750 | orchestrator | 2026-01-07 00:58:41.842755 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-01-07 00:58:41.842759 | orchestrator | Wednesday 07 January 2026 00:51:48 +0000 (0:00:01.854) 0:04:10.558 ***** 2026-01-07 00:58:41.842768 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.842773 | orchestrator | 2026-01-07 00:58:41.842777 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-07 00:58:41.842782 | orchestrator | Wednesday 07 January 2026 00:51:49 +0000 (0:00:00.872) 0:04:11.430 ***** 2026-01-07 00:58:41.842786 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-01-07 00:58:41.842791 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842795 | orchestrator | 2026-01-07 00:58:41.842800 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-07 00:58:41.842804 | orchestrator | Wednesday 07 January 2026 00:52:11 +0000 (0:00:21.931) 0:04:33.362 ***** 2026-01-07 00:58:41.842809 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842814 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.842818 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842823 | orchestrator | 2026-01-07 00:58:41.842827 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-07 00:58:41.842832 | orchestrator | Wednesday 07 January 2026 00:52:19 +0000 (0:00:08.040) 0:04:41.403 ***** 2026-01-07 00:58:41.842837 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.842841 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.842846 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.842851 | orchestrator | 2026-01-07 00:58:41.842855 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-07 00:58:41.842865 | orchestrator | 
Wednesday 07 January 2026 00:52:20 +0000 (0:00:00.571) 0:04:41.974 ***** 2026-01-07 00:58:41.842872 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-07 00:58:41.842879 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-07 00:58:41.842885 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-07 00:58:41.842892 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-07 00:58:41.842898 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-07 00:58:41.842907 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77f529a11a54df92bd2986db284e2fbf7965d963'}])  2026-01-07 00:58:41.842917 | orchestrator | 2026-01-07 00:58:41.842922 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:58:41.842926 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:14.468) 0:04:56.443 ***** 2026-01-07 00:58:41.842931 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.842936 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.842941 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.842945 | orchestrator | 2026-01-07 00:58:41.842950 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-07 00:58:41.842954 | orchestrator | Wednesday 07 January 2026 00:52:34 +0000 (0:00:00.319) 0:04:56.762 ***** 2026-01-07 00:58:41.842959 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.842964 | orchestrator | 2026-01-07 00:58:41.842968 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-07 00:58:41.842973 | orchestrator | Wednesday 07 January 2026 00:52:35 +0000 (0:00:00.827) 0:04:57.590 ***** 2026-01-07 00:58:41.842977 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.842982 | orchestrator | ok: [testbed-node-1] 2026-01-07 
00:58:41.842987 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.842992 | orchestrator | 2026-01-07 00:58:41.842996 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-07 00:58:41.843001 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.373) 0:04:57.964 ***** 2026-01-07 00:58:41.843006 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843010 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843015 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843020 | orchestrator | 2026-01-07 00:58:41.843024 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-07 00:58:41.843029 | orchestrator | Wednesday 07 January 2026 00:52:36 +0000 (0:00:00.471) 0:04:58.436 ***** 2026-01-07 00:58:41.843034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:58:41.843038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:58:41.843043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:58:41.843048 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843052 | orchestrator | 2026-01-07 00:58:41.843057 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-07 00:58:41.843061 | orchestrator | Wednesday 07 January 2026 00:52:37 +0000 (0:00:01.050) 0:04:59.486 ***** 2026-01-07 00:58:41.843066 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843071 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843079 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843084 | orchestrator | 2026-01-07 00:58:41.843088 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-07 00:58:41.843093 | orchestrator | 2026-01-07 00:58:41.843098 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-01-07 00:58:41.843102 | orchestrator | Wednesday 07 January 2026 00:52:38 +0000 (0:00:00.856) 0:05:00.342 ***** 2026-01-07 00:58:41.843107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.843112 | orchestrator | 2026-01-07 00:58:41.843116 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:58:41.843121 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:00.570) 0:05:00.913 ***** 2026-01-07 00:58:41.843126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.843130 | orchestrator | 2026-01-07 00:58:41.843135 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:58:41.843143 | orchestrator | Wednesday 07 January 2026 00:52:39 +0000 (0:00:00.832) 0:05:01.745 ***** 2026-01-07 00:58:41.843148 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843152 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843157 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843161 | orchestrator | 2026-01-07 00:58:41.843166 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:58:41.843171 | orchestrator | Wednesday 07 January 2026 00:52:40 +0000 (0:00:00.877) 0:05:02.622 ***** 2026-01-07 00:58:41.843175 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843180 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843185 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843189 | orchestrator | 2026-01-07 00:58:41.843194 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:58:41.843198 | orchestrator | Wednesday 07 January 2026 00:52:41 +0000 
(0:00:00.347) 0:05:02.969 ***** 2026-01-07 00:58:41.843203 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843208 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843212 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843217 | orchestrator | 2026-01-07 00:58:41.843221 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:58:41.843226 | orchestrator | Wednesday 07 January 2026 00:52:41 +0000 (0:00:00.673) 0:05:03.643 ***** 2026-01-07 00:58:41.843230 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843235 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843239 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843244 | orchestrator | 2026-01-07 00:58:41.843248 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:58:41.843253 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:00.372) 0:05:04.015 ***** 2026-01-07 00:58:41.843258 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843262 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843267 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843271 | orchestrator | 2026-01-07 00:58:41.843276 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:58:41.843283 | orchestrator | Wednesday 07 January 2026 00:52:42 +0000 (0:00:00.717) 0:05:04.733 ***** 2026-01-07 00:58:41.843288 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843293 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843297 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843302 | orchestrator | 2026-01-07 00:58:41.843307 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:58:41.843311 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:00.364) 
0:05:05.098 ***** 2026-01-07 00:58:41.843316 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843320 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843325 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843329 | orchestrator | 2026-01-07 00:58:41.843334 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:58:41.843339 | orchestrator | Wednesday 07 January 2026 00:52:43 +0000 (0:00:00.555) 0:05:05.653 ***** 2026-01-07 00:58:41.843343 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843348 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843353 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843357 | orchestrator | 2026-01-07 00:58:41.843362 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:58:41.843366 | orchestrator | Wednesday 07 January 2026 00:52:44 +0000 (0:00:00.877) 0:05:06.531 ***** 2026-01-07 00:58:41.843371 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843376 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843380 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843385 | orchestrator | 2026-01-07 00:58:41.843390 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:58:41.843394 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:00.870) 0:05:07.401 ***** 2026-01-07 00:58:41.843402 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843407 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843411 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843416 | orchestrator | 2026-01-07 00:58:41.843420 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:58:41.843425 | orchestrator | Wednesday 07 January 2026 00:52:45 +0000 (0:00:00.309) 0:05:07.711 ***** 2026-01-07 
00:58:41.843430 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843434 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843439 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843443 | orchestrator | 2026-01-07 00:58:41.843448 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:58:41.843452 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:00.406) 0:05:08.117 ***** 2026-01-07 00:58:41.843457 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843462 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843466 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843471 | orchestrator | 2026-01-07 00:58:41.843476 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:58:41.843496 | orchestrator | Wednesday 07 January 2026 00:52:46 +0000 (0:00:00.580) 0:05:08.697 ***** 2026-01-07 00:58:41.843500 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843505 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843510 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843515 | orchestrator | 2026-01-07 00:58:41.843519 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:58:41.843524 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:00.426) 0:05:09.124 ***** 2026-01-07 00:58:41.843528 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843533 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843538 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843542 | orchestrator | 2026-01-07 00:58:41.843547 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:58:41.843551 | orchestrator | Wednesday 07 January 2026 00:52:47 +0000 (0:00:00.383) 0:05:09.507 ***** 2026-01-07 00:58:41.843556 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843561 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843566 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843570 | orchestrator | 2026-01-07 00:58:41.843575 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.843580 | orchestrator | Wednesday 07 January 2026 00:52:48 +0000 (0:00:00.368) 0:05:09.876 ***** 2026-01-07 00:58:41.843584 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843589 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843593 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843598 | orchestrator | 2026-01-07 00:58:41.843603 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.843607 | orchestrator | Wednesday 07 January 2026 00:52:48 +0000 (0:00:00.562) 0:05:10.439 ***** 2026-01-07 00:58:41.843612 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843616 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843621 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843626 | orchestrator | 2026-01-07 00:58:41.843630 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.843635 | orchestrator | Wednesday 07 January 2026 00:52:48 +0000 (0:00:00.344) 0:05:10.784 ***** 2026-01-07 00:58:41.843640 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843644 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843649 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843654 | orchestrator | 2026-01-07 00:58:41.843658 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.843663 | orchestrator | Wednesday 07 January 2026 00:52:49 +0000 (0:00:00.351) 0:05:11.135 ***** 2026-01-07 00:58:41.843667 | orchestrator | ok: [testbed-node-0] 
2026-01-07 00:58:41.843672 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843681 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843685 | orchestrator | 2026-01-07 00:58:41.843690 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:58:41.843694 | orchestrator | Wednesday 07 January 2026 00:52:50 +0000 (0:00:00.868) 0:05:12.003 ***** 2026-01-07 00:58:41.843699 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:58:41.843704 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:58:41.843708 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:58:41.843713 | orchestrator | 2026-01-07 00:58:41.843720 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-07 00:58:41.843725 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:00.851) 0:05:12.855 ***** 2026-01-07 00:58:41.843730 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.843735 | orchestrator | 2026-01-07 00:58:41.843739 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-07 00:58:41.843744 | orchestrator | Wednesday 07 January 2026 00:52:51 +0000 (0:00:00.543) 0:05:13.398 ***** 2026-01-07 00:58:41.843748 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.843753 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.843758 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.843762 | orchestrator | 2026-01-07 00:58:41.843767 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-07 00:58:41.843771 | orchestrator | Wednesday 07 January 2026 00:52:52 +0000 (0:00:00.801) 0:05:14.200 ***** 2026-01-07 00:58:41.843776 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843780 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843785 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.843789 | orchestrator | 2026-01-07 00:58:41.843794 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-07 00:58:41.843799 | orchestrator | Wednesday 07 January 2026 00:52:53 +0000 (0:00:00.630) 0:05:14.831 ***** 2026-01-07 00:58:41.843803 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.843808 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.843819 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.843824 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-07 00:58:41.843829 | orchestrator | 2026-01-07 00:58:41.843834 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-07 00:58:41.843838 | orchestrator | Wednesday 07 January 2026 00:53:03 +0000 (0:00:10.529) 0:05:25.360 ***** 2026-01-07 00:58:41.843843 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843848 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843852 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843857 | orchestrator | 2026-01-07 00:58:41.843862 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-07 00:58:41.843866 | orchestrator | Wednesday 07 January 2026 00:53:03 +0000 (0:00:00.350) 0:05:25.710 ***** 2026-01-07 00:58:41.843871 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-07 00:58:41.843875 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-07 00:58:41.843880 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-07 00:58:41.843884 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.843889 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.843898 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.843904 | orchestrator | 2026-01-07 00:58:41.843908 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:58:41.843913 | orchestrator | Wednesday 07 January 2026 00:53:06 +0000 (0:00:02.563) 0:05:28.273 ***** 2026-01-07 00:58:41.843917 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-07 00:58:41.843926 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-07 00:58:41.843931 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-07 00:58:41.843935 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 00:58:41.843940 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-07 00:58:41.843944 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-07 00:58:41.843949 | orchestrator | 2026-01-07 00:58:41.843954 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-07 00:58:41.843958 | orchestrator | Wednesday 07 January 2026 00:53:07 +0000 (0:00:01.413) 0:05:29.687 ***** 2026-01-07 00:58:41.843963 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.843967 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.843972 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.843976 | orchestrator | 2026-01-07 00:58:41.843981 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-07 00:58:41.843986 | orchestrator | Wednesday 07 January 2026 00:53:08 +0000 (0:00:00.988) 0:05:30.676 ***** 2026-01-07 00:58:41.843990 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.843995 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.843999 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.844004 | 
orchestrator | 2026-01-07 00:58:41.844008 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-07 00:58:41.844013 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.346) 0:05:31.022 ***** 2026-01-07 00:58:41.844018 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.844022 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.844027 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.844032 | orchestrator | 2026-01-07 00:58:41.844036 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-07 00:58:41.844041 | orchestrator | Wednesday 07 January 2026 00:53:09 +0000 (0:00:00.310) 0:05:31.332 ***** 2026-01-07 00:58:41.844045 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.844050 | orchestrator | 2026-01-07 00:58:41.844054 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-07 00:58:41.844059 | orchestrator | Wednesday 07 January 2026 00:53:10 +0000 (0:00:00.742) 0:05:32.075 ***** 2026-01-07 00:58:41.844064 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.844068 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.844073 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.844077 | orchestrator | 2026-01-07 00:58:41.844082 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-07 00:58:41.844086 | orchestrator | Wednesday 07 January 2026 00:53:10 +0000 (0:00:00.361) 0:05:32.437 ***** 2026-01-07 00:58:41.844091 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.844095 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.844100 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.844105 | orchestrator | 2026-01-07 00:58:41.844112 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-07 00:58:41.844117 | orchestrator | Wednesday 07 January 2026 00:53:10 +0000 (0:00:00.327) 0:05:32.765 ***** 2026-01-07 00:58:41.844122 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.844126 | orchestrator | 2026-01-07 00:58:41.844131 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-07 00:58:41.844136 | orchestrator | Wednesday 07 January 2026 00:53:11 +0000 (0:00:00.776) 0:05:33.541 ***** 2026-01-07 00:58:41.844140 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844145 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844150 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844154 | orchestrator | 2026-01-07 00:58:41.844159 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-07 00:58:41.844163 | orchestrator | Wednesday 07 January 2026 00:53:12 +0000 (0:00:01.196) 0:05:34.737 ***** 2026-01-07 00:58:41.844174 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844178 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844183 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844188 | orchestrator | 2026-01-07 00:58:41.844193 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-07 00:58:41.844197 | orchestrator | Wednesday 07 January 2026 00:53:14 +0000 (0:00:01.152) 0:05:35.889 ***** 2026-01-07 00:58:41.844202 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844206 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844211 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844215 | orchestrator | 2026-01-07 00:58:41.844220 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-01-07 00:58:41.844225 | orchestrator | Wednesday 07 January 2026 00:53:16 +0000 (0:00:01.984) 0:05:37.874 ***** 2026-01-07 00:58:41.844230 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844234 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844239 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844243 | orchestrator | 2026-01-07 00:58:41.844248 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-07 00:58:41.844253 | orchestrator | Wednesday 07 January 2026 00:53:18 +0000 (0:00:02.231) 0:05:40.106 ***** 2026-01-07 00:58:41.844257 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.844262 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.844267 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-07 00:58:41.844271 | orchestrator | 2026-01-07 00:58:41.844276 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-07 00:58:41.844280 | orchestrator | Wednesday 07 January 2026 00:53:18 +0000 (0:00:00.682) 0:05:40.788 ***** 2026-01-07 00:58:41.844288 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-07 00:58:41.844293 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-07 00:58:41.844298 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-07 00:58:41.844303 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-07 00:58:41.844307 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-01-07 00:58:41.844312 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-01-07 00:58:41.844317 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.844321 | orchestrator | 2026-01-07 00:58:41.844326 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-07 00:58:41.844331 | orchestrator | Wednesday 07 January 2026 00:53:55 +0000 (0:00:36.142) 0:06:16.930 ***** 2026-01-07 00:58:41.844335 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.844340 | orchestrator | 2026-01-07 00:58:41.844344 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-07 00:58:41.844349 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:01.358) 0:06:18.289 ***** 2026-01-07 00:58:41.844353 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.844358 | orchestrator | 2026-01-07 00:58:41.844363 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-07 00:58:41.844367 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.309) 0:06:18.598 ***** 2026-01-07 00:58:41.844372 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.844376 | orchestrator | 2026-01-07 00:58:41.844381 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-07 00:58:41.844385 | orchestrator | Wednesday 07 January 2026 00:53:56 +0000 (0:00:00.160) 0:06:18.758 ***** 2026-01-07 00:58:41.844390 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-07 00:58:41.844398 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-07 00:58:41.844403 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-07 
00:58:41.844408 | orchestrator | 2026-01-07 00:58:41.844412 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-01-07 00:58:41.844417 | orchestrator | Wednesday 07 January 2026 00:54:03 +0000 (0:00:06.459) 0:06:25.218 ***** 2026-01-07 00:58:41.844421 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-07 00:58:41.844426 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-07 00:58:41.844431 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-07 00:58:41.844435 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-07 00:58:41.844440 | orchestrator | 2026-01-07 00:58:41.844447 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:58:41.844452 | orchestrator | Wednesday 07 January 2026 00:54:08 +0000 (0:00:05.327) 0:06:30.545 ***** 2026-01-07 00:58:41.844457 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844461 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844466 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844471 | orchestrator | 2026-01-07 00:58:41.844475 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-07 00:58:41.844496 | orchestrator | Wednesday 07 January 2026 00:54:09 +0000 (0:00:00.732) 0:06:31.278 ***** 2026-01-07 00:58:41.844501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.844506 | orchestrator | 2026-01-07 00:58:41.844510 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-07 00:58:41.844515 | orchestrator | Wednesday 07 January 2026 00:54:09 +0000 (0:00:00.508) 0:06:31.786 ***** 2026-01-07 00:58:41.844520 | orchestrator | ok: [testbed-node-0] 
2026-01-07 00:58:41.844524 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.844529 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.844533 | orchestrator | 2026-01-07 00:58:41.844538 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-07 00:58:41.844542 | orchestrator | Wednesday 07 January 2026 00:54:10 +0000 (0:00:00.575) 0:06:32.362 ***** 2026-01-07 00:58:41.844547 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.844552 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.844556 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.844561 | orchestrator | 2026-01-07 00:58:41.844565 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-07 00:58:41.844570 | orchestrator | Wednesday 07 January 2026 00:54:11 +0000 (0:00:01.274) 0:06:33.637 ***** 2026-01-07 00:58:41.844575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-07 00:58:41.844579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-07 00:58:41.844584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-07 00:58:41.844588 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.844593 | orchestrator | 2026-01-07 00:58:41.844597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-07 00:58:41.844602 | orchestrator | Wednesday 07 January 2026 00:54:12 +0000 (0:00:00.594) 0:06:34.232 ***** 2026-01-07 00:58:41.844607 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.844612 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.844616 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.844621 | orchestrator | 2026-01-07 00:58:41.844625 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-07 00:58:41.844630 | orchestrator | 2026-01-07 00:58:41.844634 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:58:41.844643 | orchestrator | Wednesday 07 January 2026 00:54:13 +0000 (0:00:00.820) 0:06:35.052 ***** 2026-01-07 00:58:41.844651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.844656 | orchestrator | 2026-01-07 00:58:41.844661 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:58:41.844666 | orchestrator | Wednesday 07 January 2026 00:54:13 +0000 (0:00:00.509) 0:06:35.562 ***** 2026-01-07 00:58:41.844670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.844675 | orchestrator | 2026-01-07 00:58:41.844680 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:58:41.844684 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:00.735) 0:06:36.297 ***** 2026-01-07 00:58:41.844689 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.844693 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.844698 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.844702 | orchestrator | 2026-01-07 00:58:41.844707 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:58:41.844712 | orchestrator | Wednesday 07 January 2026 00:54:14 +0000 (0:00:00.311) 0:06:36.609 ***** 2026-01-07 00:58:41.844717 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.844721 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.844726 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.844730 | orchestrator | 2026-01-07 00:58:41.844735 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 
00:58:41.844740 | orchestrator | Wednesday 07 January 2026 00:54:15 +0000 (0:00:00.711) 0:06:37.320 ***** 2026-01-07 00:58:41.844744 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.844749 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.844754 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.844758 | orchestrator | 2026-01-07 00:58:41.844763 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:58:41.844767 | orchestrator | Wednesday 07 January 2026 00:54:16 +0000 (0:00:00.708) 0:06:38.029 ***** 2026-01-07 00:58:41.844772 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.844777 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.844781 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.844786 | orchestrator | 2026-01-07 00:58:41.844790 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:58:41.844795 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.987) 0:06:39.017 ***** 2026-01-07 00:58:41.844800 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.844804 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.844809 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.844814 | orchestrator | 2026-01-07 00:58:41.844818 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:58:41.844825 | orchestrator | Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.302) 0:06:39.319 ***** 2026-01-07 00:58:41.844834 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.844841 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.844849 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.844857 | orchestrator | 2026-01-07 00:58:41.844864 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:58:41.844883 | orchestrator | 
Wednesday 07 January 2026 00:54:17 +0000 (0:00:00.309) 0:06:39.629 ***** 2026-01-07 00:58:41.844891 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.844899 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.844907 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.844914 | orchestrator | 2026-01-07 00:58:41.844921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:58:41.844929 | orchestrator | Wednesday 07 January 2026 00:54:18 +0000 (0:00:00.327) 0:06:39.956 ***** 2026-01-07 00:58:41.844937 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.844944 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.844952 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.844966 | orchestrator | 2026-01-07 00:58:41.844972 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:58:41.844979 | orchestrator | Wednesday 07 January 2026 00:54:19 +0000 (0:00:01.160) 0:06:41.117 ***** 2026-01-07 00:58:41.844987 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.844996 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845003 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845010 | orchestrator | 2026-01-07 00:58:41.845017 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:58:41.845024 | orchestrator | Wednesday 07 January 2026 00:54:20 +0000 (0:00:00.798) 0:06:41.915 ***** 2026-01-07 00:58:41.845031 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845038 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845045 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845053 | orchestrator | 2026-01-07 00:58:41.845060 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:58:41.845068 | orchestrator | Wednesday 07 January 2026 00:54:20 
+0000 (0:00:00.400) 0:06:42.315 ***** 2026-01-07 00:58:41.845075 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845084 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845092 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845100 | orchestrator | 2026-01-07 00:58:41.845108 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:58:41.845122 | orchestrator | Wednesday 07 January 2026 00:54:20 +0000 (0:00:00.346) 0:06:42.662 ***** 2026-01-07 00:58:41.845131 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845139 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845147 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845155 | orchestrator | 2026-01-07 00:58:41.845163 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:58:41.845171 | orchestrator | Wednesday 07 January 2026 00:54:21 +0000 (0:00:00.670) 0:06:43.332 ***** 2026-01-07 00:58:41.845179 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845187 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845194 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845202 | orchestrator | 2026-01-07 00:58:41.845210 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:58:41.845227 | orchestrator | Wednesday 07 January 2026 00:54:21 +0000 (0:00:00.319) 0:06:43.651 ***** 2026-01-07 00:58:41.845236 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845244 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845253 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845261 | orchestrator | 2026-01-07 00:58:41.845269 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:58:41.845277 | orchestrator | Wednesday 07 January 2026 00:54:22 +0000 (0:00:00.288) 0:06:43.940 ***** 2026-01-07 
00:58:41.845284 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845292 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845299 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845307 | orchestrator | 2026-01-07 00:58:41.845315 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.845322 | orchestrator | Wednesday 07 January 2026 00:54:22 +0000 (0:00:00.300) 0:06:44.240 ***** 2026-01-07 00:58:41.845330 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845338 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845346 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845354 | orchestrator | 2026-01-07 00:58:41.845362 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.845369 | orchestrator | Wednesday 07 January 2026 00:54:22 +0000 (0:00:00.279) 0:06:44.520 ***** 2026-01-07 00:58:41.845377 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845384 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845391 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845398 | orchestrator | 2026-01-07 00:58:41.845405 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.845422 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:00.492) 0:06:45.013 ***** 2026-01-07 00:58:41.845430 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845438 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845446 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845453 | orchestrator | 2026-01-07 00:58:41.845462 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.845469 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:00.296) 0:06:45.309 ***** 2026-01-07 00:58:41.845477 | 
orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845503 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845510 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845517 | orchestrator | 2026-01-07 00:58:41.845524 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-07 00:58:41.845531 | orchestrator | Wednesday 07 January 2026 00:54:23 +0000 (0:00:00.473) 0:06:45.783 ***** 2026-01-07 00:58:41.845537 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845544 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845551 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845558 | orchestrator | 2026-01-07 00:58:41.845564 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-07 00:58:41.845572 | orchestrator | Wednesday 07 January 2026 00:54:24 +0000 (0:00:00.520) 0:06:46.304 ***** 2026-01-07 00:58:41.845580 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 00:58:41.845588 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 00:58:41.845596 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 00:58:41.845603 | orchestrator | 2026-01-07 00:58:41.845617 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-07 00:58:41.845623 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 (0:00:00.606) 0:06:46.911 ***** 2026-01-07 00:58:41.845627 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.845632 | orchestrator | 2026-01-07 00:58:41.845636 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-07 00:58:41.845641 | orchestrator | Wednesday 07 January 2026 00:54:25 +0000 
(0:00:00.457) 0:06:47.368 ***** 2026-01-07 00:58:41.845645 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845650 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845654 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845659 | orchestrator | 2026-01-07 00:58:41.845663 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-07 00:58:41.845669 | orchestrator | Wednesday 07 January 2026 00:54:26 +0000 (0:00:00.444) 0:06:47.812 ***** 2026-01-07 00:58:41.845673 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.845678 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.845682 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.845687 | orchestrator | 2026-01-07 00:58:41.845691 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-07 00:58:41.845696 | orchestrator | Wednesday 07 January 2026 00:54:26 +0000 (0:00:00.284) 0:06:48.097 ***** 2026-01-07 00:58:41.845700 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845705 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845710 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845714 | orchestrator | 2026-01-07 00:58:41.845719 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-07 00:58:41.845723 | orchestrator | Wednesday 07 January 2026 00:54:26 +0000 (0:00:00.567) 0:06:48.664 ***** 2026-01-07 00:58:41.845728 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.845732 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.845736 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.845741 | orchestrator | 2026-01-07 00:58:41.845745 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-07 00:58:41.845755 | orchestrator | Wednesday 07 January 2026 00:54:27 +0000 (0:00:00.286) 0:06:48.950 ***** 
2026-01-07 00:58:41.845759 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:58:41.845764 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:58:41.845769 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-07 00:58:41.845781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:58:41.845786 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:58:41.845790 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-07 00:58:41.845795 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:58:41.845800 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:58:41.845804 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:58:41.845809 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:58:41.845813 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-07 00:58:41.845818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:58:41.845822 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-07 00:58:41.845827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:58:41.845831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-07 00:58:41.845836 | orchestrator |
2026-01-07 00:58:41.845840 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-07 00:58:41.845845 | orchestrator | Wednesday 07 January 2026 00:54:30 +0000 (0:00:03.768) 0:06:52.719 *****
2026-01-07 00:58:41.845849 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.845854 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.845859 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.845863 | orchestrator |
2026-01-07 00:58:41.845868 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-07 00:58:41.845872 | orchestrator | Wednesday 07 January 2026 00:54:31 +0000 (0:00:00.302) 0:06:53.022 *****
2026-01-07 00:58:41.845877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.845881 | orchestrator |
2026-01-07 00:58:41.845888 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-07 00:58:41.845894 | orchestrator | Wednesday 07 January 2026 00:54:31 +0000 (0:00:00.494) 0:06:53.516 *****
2026-01-07 00:58:41.845902 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:58:41.845908 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:58:41.845913 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-07 00:58:41.845917 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-07 00:58:41.845922 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-07 00:58:41.845927 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-07 00:58:41.845931 | orchestrator |
2026-01-07 00:58:41.845939 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-07 00:58:41.845944 | orchestrator | Wednesday 07 January 2026 00:54:33 +0000 (0:00:01.447) 0:06:54.964 *****
2026-01-07 00:58:41.845948 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07 00:58:41.845953 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-07 00:58:41.845961 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-07 00:58:41.845966 | orchestrator |
2026-01-07 00:58:41.845971 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-07 00:58:41.845975 | orchestrator | Wednesday 07 January 2026 00:54:35 +0000 (0:00:02.120) 0:06:57.085 *****
2026-01-07 00:58:41.845980 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 00:58:41.845984 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-07 00:58:41.845989 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.845993 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 00:58:41.845998 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-07 00:58:41.846003 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.846007 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 00:58:41.846106 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-07 00:58:41.846114 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.846118 | orchestrator |
2026-01-07 00:58:41.846123 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-07 00:58:41.846127 | orchestrator | Wednesday 07 January 2026 00:54:36 +0000 (0:00:01.208) 0:06:58.294 *****
2026-01-07 00:58:41.846132 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:58:41.846137 | orchestrator |
2026-01-07 00:58:41.846141 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-07 00:58:41.846145 | orchestrator | Wednesday 07 January 2026 00:54:38 +0000 (0:00:01.902) 0:07:00.197 *****
2026-01-07 00:58:41.846150 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.846155 | orchestrator |
2026-01-07 00:58:41.846159 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-07 00:58:41.846164 | orchestrator | Wednesday 07 January 2026 00:54:38 +0000 (0:00:00.546) 0:07:00.744 *****
2026-01-07 00:58:41.846169 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29ea93ed-0a9a-5585-8fd4-59056229f60b', 'data_vg': 'ceph-29ea93ed-0a9a-5585-8fd4-59056229f60b'})
2026-01-07 00:58:41.846174 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dee3f89e-6ecc-57ac-a128-7ff5a8885640', 'data_vg': 'ceph-dee3f89e-6ecc-57ac-a128-7ff5a8885640'})
2026-01-07 00:58:41.846183 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0b3967c5-6312-5066-b0c3-d93b1266106e', 'data_vg': 'ceph-0b3967c5-6312-5066-b0c3-d93b1266106e'})
2026-01-07 00:58:41.846188 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1', 'data_vg': 'ceph-f1de19d5-0a66-5bfe-890b-5e52c2bc57c1'})
2026-01-07 00:58:41.846193 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6ed406c7-6b31-5121-9e07-a95f5a11b8c1', 'data_vg': 'ceph-6ed406c7-6b31-5121-9e07-a95f5a11b8c1'})
2026-01-07 00:58:41.846197 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1079410-ca98-5ed2-be64-415d52b0d3f8', 'data_vg': 'ceph-c1079410-ca98-5ed2-be64-415d52b0d3f8'})
2026-01-07 00:58:41.846202 | orchestrator |
2026-01-07 00:58:41.846206 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-07 00:58:41.846211 | orchestrator | Wednesday 07 January 2026 00:55:18 +0000 (0:00:39.532) 0:07:40.276 *****
2026-01-07 00:58:41.846215 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846220 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846224 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.846229 | orchestrator |
2026-01-07 00:58:41.846233 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-07 00:58:41.846239 | orchestrator | Wednesday 07 January 2026 00:55:18 +0000 (0:00:00.307) 0:07:40.583 *****
2026-01-07 00:58:41.846247 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.846255 | orchestrator |
2026-01-07 00:58:41.846262 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-07 00:58:41.846276 | orchestrator | Wednesday 07 January 2026 00:55:19 +0000 (0:00:00.508) 0:07:41.092 *****
2026-01-07 00:58:41.846283 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.846291 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.846298 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.846306 | orchestrator |
2026-01-07 00:58:41.846313 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-07 00:58:41.846320 | orchestrator | Wednesday 07 January 2026 00:55:20 +0000 (0:00:01.053) 0:07:42.146 *****
2026-01-07 00:58:41.846328 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.846336 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.846344 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.846352 | orchestrator |
2026-01-07 00:58:41.846360 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-07 00:58:41.846368 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:02.657) 0:07:44.803 *****
2026-01-07 00:58:41.846377 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.846385 | orchestrator |
2026-01-07 00:58:41.846393 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-07 00:58:41.846401 | orchestrator | Wednesday 07 January 2026 00:55:23 +0000 (0:00:00.500) 0:07:45.303 *****
2026-01-07 00:58:41.846409 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.846422 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.846429 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.846437 | orchestrator |
2026-01-07 00:58:41.846445 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-07 00:58:41.846453 | orchestrator | Wednesday 07 January 2026 00:55:25 +0000 (0:00:01.575) 0:07:46.879 *****
2026-01-07 00:58:41.846461 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.846469 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.846477 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.846502 | orchestrator |
2026-01-07 00:58:41.846510 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-07 00:58:41.846517 | orchestrator | Wednesday 07 January 2026 00:55:26 +0000 (0:00:01.207) 0:07:48.086 *****
2026-01-07 00:58:41.846524 | orchestrator | changed: [testbed-node-3]
2026-01-07 00:58:41.846532 | orchestrator | changed: [testbed-node-5]
2026-01-07 00:58:41.846540 | orchestrator | changed: [testbed-node-4]
2026-01-07 00:58:41.846547 | orchestrator |
2026-01-07 00:58:41.846556 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-07 00:58:41.846561 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:01.871) 0:07:49.958 *****
2026-01-07 00:58:41.846566 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846570 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846575 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.846580 | orchestrator |
2026-01-07 00:58:41.846584 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-07 00:58:41.846589 | orchestrator | Wednesday 07 January 2026 00:55:28 +0000 (0:00:00.320) 0:07:50.278 *****
2026-01-07 00:58:41.846593 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846598 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846602 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.846607 | orchestrator |
2026-01-07 00:58:41.846612 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-07 00:58:41.846616 | orchestrator | Wednesday 07 January 2026 00:55:29 +0000 (0:00:00.639) 0:07:50.917 *****
2026-01-07 00:58:41.846621 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 00:58:41.846625 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-07 00:58:41.846630 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-01-07 00:58:41.846634 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-01-07 00:58:41.846639 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-01-07 00:58:41.846643 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-07 00:58:41.846653 | orchestrator |
2026-01-07 00:58:41.846658 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-07 00:58:41.846663 | orchestrator | Wednesday 07 January 2026 00:55:30 +0000 (0:00:01.066) 0:07:51.984 *****
2026-01-07 00:58:41.846668 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-07 00:58:41.846672 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-07 00:58:41.846683 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-07 00:58:41.846688 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-07 00:58:41.846692 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-07 00:58:41.846696 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-07 00:58:41.846701 | orchestrator |
2026-01-07 00:58:41.846705 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-07 00:58:41.846710 | orchestrator | Wednesday 07 January 2026 00:55:32 +0000 (0:00:02.203) 0:07:54.188 *****
2026-01-07 00:58:41.846715 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-07 00:58:41.846722 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-07 00:58:41.846729 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-07 00:58:41.846738 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-01-07 00:58:41.846749 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-07 00:58:41.846757 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-07 00:58:41.846765 | orchestrator |
2026-01-07 00:58:41.846772 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-07 00:58:41.846779 | orchestrator | Wednesday 07 January 2026 00:55:37 +0000 (0:00:04.916) 0:07:59.104 *****
2026-01-07 00:58:41.846786 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846793 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846800 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:58:41.846807 | orchestrator |
2026-01-07 00:58:41.846813 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-07 00:58:41.846820 | orchestrator | Wednesday 07 January 2026 00:55:40 +0000 (0:00:03.638) 0:08:02.743 *****
2026-01-07 00:58:41.846828 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846835 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846843 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-07 00:58:41.846850 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-07 00:58:41.846857 | orchestrator |
2026-01-07 00:58:41.846865 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-07 00:58:41.846872 | orchestrator | Wednesday 07 January 2026 00:55:53 +0000 (0:00:12.458) 0:08:15.201 *****
2026-01-07 00:58:41.846879 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846887 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846893 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.846901 | orchestrator |
2026-01-07 00:58:41.846908 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-07 00:58:41.846916 | orchestrator | Wednesday 07 January 2026 00:55:54 +0000 (0:00:01.080) 0:08:16.282 *****
2026-01-07 00:58:41.846924 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.846932 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.846939 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.846946 | orchestrator |
2026-01-07 00:58:41.846954 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-07 00:58:41.846960 | orchestrator | Wednesday 07 January 2026 00:55:54 +0000 (0:00:00.338) 0:08:16.621 *****
2026-01-07 00:58:41.846965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 00:58:41.846969 | orchestrator |
2026-01-07 00:58:41.846978 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-07 00:58:41.846982 | orchestrator | Wednesday 07 January 2026 00:55:55 +0000 (0:00:00.580) 0:08:17.202 *****
2026-01-07 00:58:41.846998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.847003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.847007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.847012 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847016 | orchestrator |
2026-01-07 00:58:41.847021 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-07 00:58:41.847026 | orchestrator | Wednesday 07 January 2026 00:55:56 +0000 (0:00:00.976) 0:08:18.178 *****
2026-01-07 00:58:41.847030 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847035 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847042 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847049 | orchestrator |
2026-01-07 00:58:41.847059 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-07 00:58:41.847068 | orchestrator | Wednesday 07 January 2026 00:55:56 +0000 (0:00:00.340) 0:08:18.519 *****
2026-01-07 00:58:41.847076 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847083 | orchestrator |
2026-01-07 00:58:41.847090 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-07 00:58:41.847097 | orchestrator | Wednesday 07 January 2026 00:55:56 +0000 (0:00:00.227) 0:08:18.747 *****
2026-01-07 00:58:41.847104 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847112 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847119 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847127 | orchestrator |
2026-01-07 00:58:41.847134 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-07 00:58:41.847142 | orchestrator | Wednesday 07 January 2026 00:55:57 +0000 (0:00:00.357) 0:08:19.105 *****
2026-01-07 00:58:41.847150 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847157 | orchestrator |
2026-01-07 00:58:41.847164 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-07 00:58:41.847172 | orchestrator | Wednesday 07 January 2026 00:55:57 +0000 (0:00:00.234) 0:08:19.339 *****
2026-01-07 00:58:41.847179 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847186 | orchestrator |
2026-01-07 00:58:41.847194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-07 00:58:41.847201 | orchestrator | Wednesday 07 January 2026 00:55:57 +0000 (0:00:00.215) 0:08:19.555 *****
2026-01-07 00:58:41.847209 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847216 | orchestrator |
2026-01-07 00:58:41.847224 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-07 00:58:41.847232 | orchestrator | Wednesday 07 January 2026 00:55:57 +0000 (0:00:00.121) 0:08:19.676 *****
2026-01-07 00:58:41.847247 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847255 | orchestrator |
2026-01-07 00:58:41.847262 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-07 00:58:41.847269 | orchestrator | Wednesday 07 January 2026 00:55:58 +0000 (0:00:00.215) 0:08:19.892 *****
2026-01-07 00:58:41.847277 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847285 | orchestrator |
2026-01-07 00:58:41.847292 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-07 00:58:41.847299 | orchestrator | Wednesday 07 January 2026 00:55:58 +0000 (0:00:00.784) 0:08:20.676 *****
2026-01-07 00:58:41.847307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 00:58:41.847315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 00:58:41.847322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 00:58:41.847329 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847337 | orchestrator |
2026-01-07 00:58:41.847345 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-07 00:58:41.847353 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:00.425) 0:08:21.101 *****
2026-01-07 00:58:41.847360 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847376 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847383 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847391 | orchestrator |
2026-01-07 00:58:41.847398 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-07 00:58:41.847406 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:00.344) 0:08:21.446 *****
2026-01-07 00:58:41.847413 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847420 | orchestrator |
2026-01-07 00:58:41.847428 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-07 00:58:41.847436 | orchestrator | Wednesday 07 January 2026 00:55:59 +0000 (0:00:00.249) 0:08:21.695 *****
2026-01-07 00:58:41.847443 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847451 | orchestrator |
2026-01-07 00:58:41.847458 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-07 00:58:41.847466 | orchestrator |
2026-01-07 00:58:41.847473 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-07 00:58:41.847497 | orchestrator | Wednesday 07 January 2026 00:56:00 +0000 (0:00:00.644) 0:08:22.340 *****
2026-01-07 00:58:41.847513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.847523 | orchestrator |
2026-01-07 00:58:41.847530 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-07 00:58:41.847539 | orchestrator | Wednesday 07 January 2026 00:56:01 +0000 (0:00:01.235) 0:08:23.575 *****
2026-01-07 00:58:41.847547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 00:58:41.847555 | orchestrator |
2026-01-07 00:58:41.847562 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-07 00:58:41.847574 | orchestrator | Wednesday 07 January 2026 00:56:03 +0000 (0:00:01.310) 0:08:24.886 *****
2026-01-07 00:58:41.847582 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847590 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847598 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847605 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.847612 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.847620 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.847628 | orchestrator |
2026-01-07 00:58:41.847636 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-07 00:58:41.847643 | orchestrator | Wednesday 07 January 2026 00:56:04 +0000 (0:00:01.397) 0:08:26.283 *****
2026-01-07 00:58:41.847651 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.847659 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.847666 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.847674 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.847681 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.847688 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.847696 | orchestrator |
2026-01-07 00:58:41.847703 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-07 00:58:41.847711 | orchestrator | Wednesday 07 January 2026 00:56:05 +0000 (0:00:00.750) 0:08:27.034 *****
2026-01-07 00:58:41.847719 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.847727 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.847734 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.847741 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.847749 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.847756 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.847764 | orchestrator |
2026-01-07 00:58:41.847771 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-07 00:58:41.847779 | orchestrator | Wednesday 07 January 2026 00:56:06 +0000 (0:00:01.061) 0:08:28.095 *****
2026-01-07 00:58:41.847786 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.847794 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.847808 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.847816 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.847823 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.847830 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.847838 | orchestrator |
2026-01-07 00:58:41.847846 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-07 00:58:41.847853 | orchestrator | Wednesday 07 January 2026 00:56:06 +0000 (0:00:00.704) 0:08:28.800 *****
2026-01-07 00:58:41.847861 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847868 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847876 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847883 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.847891 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.847899 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.847906 | orchestrator |
2026-01-07 00:58:41.847914 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-07 00:58:41.847927 | orchestrator | Wednesday 07 January 2026 00:56:08 +0000 (0:00:01.325) 0:08:30.125 *****
2026-01-07 00:58:41.847935 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.847943 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.847950 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.847958 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.847965 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.847972 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.847980 | orchestrator |
2026-01-07 00:58:41.847987 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-07 00:58:41.847995 | orchestrator | Wednesday 07 January 2026 00:56:08 +0000 (0:00:00.613) 0:08:30.738 *****
2026-01-07 00:58:41.848003 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.848010 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.848018 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.848026 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.848033 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.848041 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.848048 | orchestrator |
2026-01-07 00:58:41.848055 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-07 00:58:41.848063 | orchestrator | Wednesday 07 January 2026 00:56:09 +0000 (0:00:00.885) 0:08:31.624 *****
2026-01-07 00:58:41.848071 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.848079 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.848086 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.848094 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.848101 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.848109 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.848116 | orchestrator |
2026-01-07 00:58:41.848124 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-07 00:58:41.848131 | orchestrator | Wednesday 07 January 2026 00:56:10 +0000 (0:00:00.969) 0:08:32.594 *****
2026-01-07 00:58:41.848138 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.848146 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.848152 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.848160 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.848168 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.848176 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.848183 | orchestrator |
2026-01-07 00:58:41.848191 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-07 00:58:41.848199 | orchestrator | Wednesday 07 January 2026 00:56:12 +0000 (0:00:01.332) 0:08:33.927 *****
2026-01-07 00:58:41.848206 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.848214 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.848221 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.848229 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.848236 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.848243 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.848257 | orchestrator |
2026-01-07 00:58:41.848265 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-07 00:58:41.848273 | orchestrator | Wednesday 07 January 2026 00:56:12 +0000 (0:00:00.564) 0:08:34.491 *****
2026-01-07 00:58:41.848280 | orchestrator | skipping: [testbed-node-3]
2026-01-07 00:58:41.848288 | orchestrator | skipping: [testbed-node-4]
2026-01-07 00:58:41.848295 | orchestrator | skipping: [testbed-node-5]
2026-01-07 00:58:41.848303 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:58:41.848310 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:58:41.848318 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:58:41.848325 | orchestrator |
2026-01-07 00:58:41.848332 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-07 00:58:41.848344 | orchestrator | Wednesday 07 January 2026 00:56:13 +0000 (0:00:00.817) 0:08:35.308 *****
2026-01-07 00:58:41.848352 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.848360 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.848368 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.848375 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.848382 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.848390 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.848397 | orchestrator |
2026-01-07 00:58:41.848405 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-07 00:58:41.848412 | orchestrator | Wednesday 07 January 2026 00:56:14 +0000 (0:00:00.608) 0:08:35.916 *****
2026-01-07 00:58:41.848419 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.848427 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.848435 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.848443 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.848451 | orchestrator | skipping: [testbed-node-1]
2026-01-07 00:58:41.848458 | orchestrator | skipping: [testbed-node-2]
2026-01-07 00:58:41.848465 | orchestrator |
2026-01-07 00:58:41.848473 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-07 00:58:41.848499 | orchestrator | Wednesday 07 January 2026 00:56:14 +0000 (0:00:00.817) 0:08:36.733 *****
2026-01-07 00:58:41.848507 | orchestrator | ok: [testbed-node-3]
2026-01-07 00:58:41.848514 | orchestrator | ok: [testbed-node-4]
2026-01-07 00:58:41.848522 | orchestrator | ok: [testbed-node-5]
2026-01-07 00:58:41.848530 | orchestrator | skipping: [testbed-node-0]
2026-01-07 00:58:41.848537 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.848545 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.848552 | orchestrator | 2026-01-07 00:58:41.848560 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:58:41.848568 | orchestrator | Wednesday 07 January 2026 00:56:15 +0000 (0:00:00.636) 0:08:37.370 ***** 2026-01-07 00:58:41.848575 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.848583 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.848590 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.848597 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.848605 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.848613 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.848620 | orchestrator | 2026-01-07 00:58:41.848628 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.848636 | orchestrator | Wednesday 07 January 2026 00:56:16 +0000 (0:00:00.910) 0:08:38.280 ***** 2026-01-07 00:58:41.848644 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.848651 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.848659 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.848666 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:58:41.848673 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:58:41.848680 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:58:41.848688 | orchestrator | 2026-01-07 00:58:41.848696 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.848708 | orchestrator | Wednesday 07 January 2026 00:56:17 +0000 (0:00:00.588) 0:08:38.869 ***** 2026-01-07 00:58:41.848721 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.848729 | orchestrator | skipping: [testbed-node-4] 
2026-01-07 00:58:41.848737 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.848744 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.848751 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.848759 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.848767 | orchestrator | 2026-01-07 00:58:41.848774 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.848781 | orchestrator | Wednesday 07 January 2026 00:56:17 +0000 (0:00:00.865) 0:08:39.735 ***** 2026-01-07 00:58:41.848788 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.848795 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.848804 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.848811 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.848819 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.848826 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.848834 | orchestrator | 2026-01-07 00:58:41.848841 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.848849 | orchestrator | Wednesday 07 January 2026 00:56:18 +0000 (0:00:00.621) 0:08:40.356 ***** 2026-01-07 00:58:41.848856 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.848864 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.848871 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.848879 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.848886 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.848894 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.848902 | orchestrator | 2026-01-07 00:58:41.848909 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-07 00:58:41.848917 | orchestrator | Wednesday 07 January 2026 00:56:19 +0000 (0:00:01.302) 0:08:41.659 ***** 2026-01-07 00:58:41.848925 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.848932 | orchestrator | 2026-01-07 00:58:41.848940 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-07 00:58:41.848947 | orchestrator | Wednesday 07 January 2026 00:56:23 +0000 (0:00:03.879) 0:08:45.538 ***** 2026-01-07 00:58:41.848955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.848962 | orchestrator | 2026-01-07 00:58:41.848969 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-07 00:58:41.848977 | orchestrator | Wednesday 07 January 2026 00:56:25 +0000 (0:00:02.037) 0:08:47.576 ***** 2026-01-07 00:58:41.848985 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.848993 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.849000 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.849008 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.849015 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.849023 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.849030 | orchestrator | 2026-01-07 00:58:41.849038 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-07 00:58:41.849045 | orchestrator | Wednesday 07 January 2026 00:56:27 +0000 (0:00:01.851) 0:08:49.427 ***** 2026-01-07 00:58:41.849053 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.849061 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.849068 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.849076 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.849083 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.849095 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.849103 | orchestrator | 2026-01-07 00:58:41.849110 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-07 00:58:41.849118 | orchestrator | Wednesday 07 January 2026 00:56:28 +0000 (0:00:01.065) 0:08:50.493 ***** 2026-01-07 00:58:41.849125 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.849133 | orchestrator | 2026-01-07 00:58:41.849140 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-07 00:58:41.849153 | orchestrator | Wednesday 07 January 2026 00:56:29 +0000 (0:00:01.265) 0:08:51.759 ***** 2026-01-07 00:58:41.849159 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.849167 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.849173 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.849179 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.849185 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.849193 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.849200 | orchestrator | 2026-01-07 00:58:41.849207 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-07 00:58:41.849214 | orchestrator | Wednesday 07 January 2026 00:56:32 +0000 (0:00:02.064) 0:08:53.823 ***** 2026-01-07 00:58:41.849220 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.849228 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.849235 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.849241 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.849248 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.849255 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.849262 | orchestrator | 2026-01-07 00:58:41.849269 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-07 00:58:41.849276 | orchestrator | Wednesday 07 January 2026 00:56:35 +0000 (0:00:03.538) 
0:08:57.362 ***** 2026-01-07 00:58:41.849284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:58:41.849292 | orchestrator | 2026-01-07 00:58:41.849299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-07 00:58:41.849307 | orchestrator | Wednesday 07 January 2026 00:56:36 +0000 (0:00:01.301) 0:08:58.664 ***** 2026-01-07 00:58:41.849315 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849323 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.849330 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849338 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.849344 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.849352 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.849359 | orchestrator | 2026-01-07 00:58:41.849367 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-07 00:58:41.849380 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:00.887) 0:08:59.552 ***** 2026-01-07 00:58:41.849387 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.849394 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.849402 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.849410 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:58:41.849418 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:58:41.849425 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:58:41.849433 | orchestrator | 2026-01-07 00:58:41.849440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-07 00:58:41.849448 | orchestrator | Wednesday 07 January 2026 00:56:40 +0000 (0:00:02.520) 0:09:02.073 ***** 2026-01-07 00:58:41.849456 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849463 | orchestrator | 
ok: [testbed-node-4] 2026-01-07 00:58:41.849470 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849494 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:58:41.849502 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:58:41.849510 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:58:41.849517 | orchestrator | 2026-01-07 00:58:41.849525 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-07 00:58:41.849533 | orchestrator | 2026-01-07 00:58:41.849540 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:58:41.849548 | orchestrator | Wednesday 07 January 2026 00:56:41 +0000 (0:00:01.218) 0:09:03.292 ***** 2026-01-07 00:58:41.849556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.849571 | orchestrator | 2026-01-07 00:58:41.849580 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:58:41.849587 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:00.589) 0:09:03.881 ***** 2026-01-07 00:58:41.849595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.849602 | orchestrator | 2026-01-07 00:58:41.849610 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:58:41.849617 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:00.823) 0:09:04.704 ***** 2026-01-07 00:58:41.849625 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.849632 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.849639 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.849647 | orchestrator | 2026-01-07 00:58:41.849654 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-07 00:58:41.849662 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:00.318) 0:09:05.023 ***** 2026-01-07 00:58:41.849670 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849678 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849685 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.849693 | orchestrator | 2026-01-07 00:58:41.849700 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:58:41.849708 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:00.715) 0:09:05.739 ***** 2026-01-07 00:58:41.849716 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849723 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.849730 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849738 | orchestrator | 2026-01-07 00:58:41.849746 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:58:41.849758 | orchestrator | Wednesday 07 January 2026 00:56:45 +0000 (0:00:01.185) 0:09:06.924 ***** 2026-01-07 00:58:41.849766 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849773 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.849780 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849788 | orchestrator | 2026-01-07 00:58:41.849795 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:58:41.849803 | orchestrator | Wednesday 07 January 2026 00:56:45 +0000 (0:00:00.823) 0:09:07.747 ***** 2026-01-07 00:58:41.849810 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.849817 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.849824 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.849832 | orchestrator | 2026-01-07 00:58:41.849839 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 
00:58:41.849846 | orchestrator | Wednesday 07 January 2026 00:56:46 +0000 (0:00:00.329) 0:09:08.077 ***** 2026-01-07 00:58:41.849853 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.849859 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.849866 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.849872 | orchestrator | 2026-01-07 00:58:41.849879 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:58:41.849886 | orchestrator | Wednesday 07 January 2026 00:56:46 +0000 (0:00:00.329) 0:09:08.407 ***** 2026-01-07 00:58:41.849893 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.849901 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.849908 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.849915 | orchestrator | 2026-01-07 00:58:41.849922 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:58:41.849928 | orchestrator | Wednesday 07 January 2026 00:56:47 +0000 (0:00:00.648) 0:09:09.055 ***** 2026-01-07 00:58:41.849935 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849943 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.849951 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.849958 | orchestrator | 2026-01-07 00:58:41.849966 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:58:41.849979 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:00.874) 0:09:09.930 ***** 2026-01-07 00:58:41.849986 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.849994 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850001 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850009 | orchestrator | 2026-01-07 00:58:41.850054 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:58:41.850062 | orchestrator | 
Wednesday 07 January 2026 00:56:48 +0000 (0:00:00.744) 0:09:10.675 ***** 2026-01-07 00:58:41.850069 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850077 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850084 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850091 | orchestrator | 2026-01-07 00:58:41.850099 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:58:41.850113 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:00.262) 0:09:10.938 ***** 2026-01-07 00:58:41.850121 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850128 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850135 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850143 | orchestrator | 2026-01-07 00:58:41.850151 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:58:41.850156 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:00.471) 0:09:11.410 ***** 2026-01-07 00:58:41.850161 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850165 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850170 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850174 | orchestrator | 2026-01-07 00:58:41.850179 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:58:41.850183 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:00.302) 0:09:11.712 ***** 2026-01-07 00:58:41.850188 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850192 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850197 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850202 | orchestrator | 2026-01-07 00:58:41.850206 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:58:41.850211 | orchestrator | Wednesday 07 January 2026 00:56:50 
+0000 (0:00:00.388) 0:09:12.100 ***** 2026-01-07 00:58:41.850215 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850220 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850224 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850229 | orchestrator | 2026-01-07 00:58:41.850233 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:58:41.850238 | orchestrator | Wednesday 07 January 2026 00:56:50 +0000 (0:00:00.429) 0:09:12.529 ***** 2026-01-07 00:58:41.850242 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850247 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850251 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850256 | orchestrator | 2026-01-07 00:58:41.850260 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.850265 | orchestrator | Wednesday 07 January 2026 00:56:51 +0000 (0:00:00.546) 0:09:13.076 ***** 2026-01-07 00:58:41.850270 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850274 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850279 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850283 | orchestrator | 2026-01-07 00:58:41.850288 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.850292 | orchestrator | Wednesday 07 January 2026 00:56:51 +0000 (0:00:00.397) 0:09:13.474 ***** 2026-01-07 00:58:41.850297 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850301 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850306 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850310 | orchestrator | 2026-01-07 00:58:41.850315 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.850320 | orchestrator | Wednesday 07 January 2026 00:56:52 +0000 (0:00:00.401) 
0:09:13.875 ***** 2026-01-07 00:58:41.850330 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850335 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850339 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850344 | orchestrator | 2026-01-07 00:58:41.850349 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.850357 | orchestrator | Wednesday 07 January 2026 00:56:52 +0000 (0:00:00.298) 0:09:14.174 ***** 2026-01-07 00:58:41.850362 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850366 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850370 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850375 | orchestrator | 2026-01-07 00:58:41.850380 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-07 00:58:41.850384 | orchestrator | Wednesday 07 January 2026 00:56:53 +0000 (0:00:00.674) 0:09:14.849 ***** 2026-01-07 00:58:41.850389 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850393 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850398 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-07 00:58:41.850403 | orchestrator | 2026-01-07 00:58:41.850407 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-07 00:58:41.850412 | orchestrator | Wednesday 07 January 2026 00:56:53 +0000 (0:00:00.354) 0:09:15.203 ***** 2026-01-07 00:58:41.850416 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.850421 | orchestrator | 2026-01-07 00:58:41.850425 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-07 00:58:41.850430 | orchestrator | Wednesday 07 January 2026 00:56:55 +0000 (0:00:02.112) 0:09:17.316 ***** 2026-01-07 00:58:41.850437 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-07 00:58:41.850444 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850449 | orchestrator | 2026-01-07 00:58:41.850453 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-07 00:58:41.850458 | orchestrator | Wednesday 07 January 2026 00:56:55 +0000 (0:00:00.189) 0:09:17.505 ***** 2026-01-07 00:58:41.850464 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:58:41.850475 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-07 00:58:41.850494 | orchestrator | 2026-01-07 00:58:41.850502 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-07 00:58:41.850507 | orchestrator | Wednesday 07 January 2026 00:57:04 +0000 (0:00:09.000) 0:09:26.506 ***** 2026-01-07 00:58:41.850512 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-07 00:58:41.850516 | orchestrator | 2026-01-07 00:58:41.850521 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-07 00:58:41.850525 | orchestrator | Wednesday 07 January 2026 00:57:07 +0000 (0:00:03.275) 0:09:29.781 ***** 2026-01-07 00:58:41.850530 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-07 00:58:41.850534 | orchestrator | 2026-01-07 00:58:41.850539 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-07 00:58:41.850543 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:00.722) 0:09:30.503 ***** 2026-01-07 00:58:41.850548 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-07 00:58:41.850552 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-07 00:58:41.850561 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-07 00:58:41.850565 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-07 00:58:41.850570 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-07 00:58:41.850575 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-07 00:58:41.850579 | orchestrator | 2026-01-07 00:58:41.850584 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-07 00:58:41.850588 | orchestrator | Wednesday 07 January 2026 00:57:09 +0000 (0:00:01.242) 0:09:31.746 ***** 2026-01-07 00:58:41.850593 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.850598 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-07 00:58:41.850602 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:58:41.850607 | orchestrator | 2026-01-07 00:58:41.850611 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:58:41.850616 | orchestrator | Wednesday 07 January 2026 00:57:12 +0000 (0:00:02.393) 0:09:34.140 ***** 2026-01-07 00:58:41.850621 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-07 00:58:41.850625 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-07 00:58:41.850630 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850634 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:58:41.850639 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-07 00:58:41.850643 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850648 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-07 00:58:41.850653 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-07 00:58:41.850657 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850662 | orchestrator | 2026-01-07 00:58:41.850666 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-07 00:58:41.850676 | orchestrator | Wednesday 07 January 2026 00:57:13 +0000 (0:00:01.458) 0:09:35.598 ***** 2026-01-07 00:58:41.850681 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850685 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850690 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850694 | orchestrator | 2026-01-07 00:58:41.850699 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-07 00:58:41.850703 | orchestrator | Wednesday 07 January 2026 00:57:16 +0000 (0:00:02.468) 0:09:38.066 ***** 2026-01-07 00:58:41.850708 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.850713 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.850717 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.850721 | orchestrator | 2026-01-07 00:58:41.850726 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-07 00:58:41.850730 | orchestrator | Wednesday 07 January 2026 00:57:16 +0000 (0:00:00.330) 0:09:38.397 ***** 2026-01-07 00:58:41.850735 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-07 00:58:41.850740 | orchestrator | 2026-01-07 00:58:41.850744 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-07 00:58:41.850749 | orchestrator | Wednesday 07 January 2026 00:57:17 +0000 (0:00:00.810) 0:09:39.207 ***** 2026-01-07 00:58:41.850753 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.850758 | orchestrator | 2026-01-07 00:58:41.850762 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-07 00:58:41.850767 | orchestrator | Wednesday 07 January 2026 00:57:17 +0000 (0:00:00.520) 0:09:39.728 ***** 2026-01-07 00:58:41.850771 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850776 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850780 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850790 | orchestrator | 2026-01-07 00:58:41.850794 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-07 00:58:41.850799 | orchestrator | Wednesday 07 January 2026 00:57:19 +0000 (0:00:01.161) 0:09:40.889 ***** 2026-01-07 00:58:41.850803 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850808 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850812 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850817 | orchestrator | 2026-01-07 00:58:41.850821 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-07 00:58:41.850826 | orchestrator | Wednesday 07 January 2026 00:57:20 +0000 (0:00:01.452) 0:09:42.341 ***** 2026-01-07 00:58:41.850830 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850835 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850839 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850844 | orchestrator | 2026-01-07 
00:58:41.850848 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-07 00:58:41.850856 | orchestrator | Wednesday 07 January 2026 00:57:22 +0000 (0:00:01.942) 0:09:44.284 ***** 2026-01-07 00:58:41.850860 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850865 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850869 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850874 | orchestrator | 2026-01-07 00:58:41.850878 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-07 00:58:41.850883 | orchestrator | Wednesday 07 January 2026 00:57:24 +0000 (0:00:02.208) 0:09:46.492 ***** 2026-01-07 00:58:41.850887 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850892 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850896 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850901 | orchestrator | 2026-01-07 00:58:41.850905 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:58:41.850910 | orchestrator | Wednesday 07 January 2026 00:57:26 +0000 (0:00:01.508) 0:09:48.000 ***** 2026-01-07 00:58:41.850915 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850919 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850924 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.850928 | orchestrator | 2026-01-07 00:58:41.850933 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-07 00:58:41.850937 | orchestrator | Wednesday 07 January 2026 00:57:26 +0000 (0:00:00.697) 0:09:48.698 ***** 2026-01-07 00:58:41.850942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.850946 | orchestrator | 2026-01-07 00:58:41.850951 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-07 00:58:41.850955 | orchestrator | Wednesday 07 January 2026 00:57:27 +0000 (0:00:00.765) 0:09:49.463 ***** 2026-01-07 00:58:41.850960 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.850964 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.850969 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.850973 | orchestrator | 2026-01-07 00:58:41.850978 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-07 00:58:41.850983 | orchestrator | Wednesday 07 January 2026 00:57:27 +0000 (0:00:00.313) 0:09:49.777 ***** 2026-01-07 00:58:41.850987 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.850992 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.850996 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.851001 | orchestrator | 2026-01-07 00:58:41.851005 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-07 00:58:41.851010 | orchestrator | Wednesday 07 January 2026 00:57:29 +0000 (0:00:01.296) 0:09:51.073 ***** 2026-01-07 00:58:41.851014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:58:41.851019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:58:41.851023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:58:41.851028 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851038 | orchestrator | 2026-01-07 00:58:41.851042 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-07 00:58:41.851047 | orchestrator | Wednesday 07 January 2026 00:57:30 +0000 (0:00:00.845) 0:09:51.919 ***** 2026-01-07 00:58:41.851051 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851056 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851060 | orchestrator | ok: [testbed-node-5] 2026-01-07 
00:58:41.851065 | orchestrator | 2026-01-07 00:58:41.851072 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-07 00:58:41.851077 | orchestrator | 2026-01-07 00:58:41.851081 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-07 00:58:41.851086 | orchestrator | Wednesday 07 January 2026 00:57:30 +0000 (0:00:00.789) 0:09:52.709 ***** 2026-01-07 00:58:41.851090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.851095 | orchestrator | 2026-01-07 00:58:41.851100 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-07 00:58:41.851104 | orchestrator | Wednesday 07 January 2026 00:57:31 +0000 (0:00:00.516) 0:09:53.226 ***** 2026-01-07 00:58:41.851109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.851113 | orchestrator | 2026-01-07 00:58:41.851118 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-07 00:58:41.851122 | orchestrator | Wednesday 07 January 2026 00:57:32 +0000 (0:00:00.723) 0:09:53.950 ***** 2026-01-07 00:58:41.851127 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851131 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851136 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851140 | orchestrator | 2026-01-07 00:58:41.851145 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-07 00:58:41.851149 | orchestrator | Wednesday 07 January 2026 00:57:32 +0000 (0:00:00.312) 0:09:54.263 ***** 2026-01-07 00:58:41.851154 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851158 | orchestrator | ok: [testbed-node-4] 2026-01-07 
00:58:41.851163 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851167 | orchestrator | 2026-01-07 00:58:41.851172 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-07 00:58:41.851176 | orchestrator | Wednesday 07 January 2026 00:57:33 +0000 (0:00:00.818) 0:09:55.081 ***** 2026-01-07 00:58:41.851181 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851185 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851190 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851194 | orchestrator | 2026-01-07 00:58:41.851199 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-07 00:58:41.851203 | orchestrator | Wednesday 07 January 2026 00:57:34 +0000 (0:00:00.766) 0:09:55.848 ***** 2026-01-07 00:58:41.851208 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851212 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851217 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851221 | orchestrator | 2026-01-07 00:58:41.851226 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-07 00:58:41.851230 | orchestrator | Wednesday 07 January 2026 00:57:35 +0000 (0:00:01.109) 0:09:56.958 ***** 2026-01-07 00:58:41.851235 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851243 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851247 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851252 | orchestrator | 2026-01-07 00:58:41.851256 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-07 00:58:41.851261 | orchestrator | Wednesday 07 January 2026 00:57:35 +0000 (0:00:00.340) 0:09:57.299 ***** 2026-01-07 00:58:41.851265 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851270 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851274 | orchestrator | skipping: 
[testbed-node-5] 2026-01-07 00:58:41.851279 | orchestrator | 2026-01-07 00:58:41.851287 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-07 00:58:41.851291 | orchestrator | Wednesday 07 January 2026 00:57:35 +0000 (0:00:00.305) 0:09:57.604 ***** 2026-01-07 00:58:41.851296 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851301 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851305 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851310 | orchestrator | 2026-01-07 00:58:41.851314 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-07 00:58:41.851319 | orchestrator | Wednesday 07 January 2026 00:57:36 +0000 (0:00:00.302) 0:09:57.907 ***** 2026-01-07 00:58:41.851323 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851328 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851332 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851337 | orchestrator | 2026-01-07 00:58:41.851341 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-07 00:58:41.851346 | orchestrator | Wednesday 07 January 2026 00:57:37 +0000 (0:00:01.118) 0:09:59.026 ***** 2026-01-07 00:58:41.851350 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851355 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851359 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851364 | orchestrator | 2026-01-07 00:58:41.851368 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-07 00:58:41.851373 | orchestrator | Wednesday 07 January 2026 00:57:37 +0000 (0:00:00.765) 0:09:59.791 ***** 2026-01-07 00:58:41.851377 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851382 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851386 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
00:58:41.851391 | orchestrator | 2026-01-07 00:58:41.851395 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-07 00:58:41.851400 | orchestrator | Wednesday 07 January 2026 00:57:38 +0000 (0:00:00.283) 0:10:00.074 ***** 2026-01-07 00:58:41.851404 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851409 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851413 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851418 | orchestrator | 2026-01-07 00:58:41.851422 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-07 00:58:41.851427 | orchestrator | Wednesday 07 January 2026 00:57:38 +0000 (0:00:00.297) 0:10:00.372 ***** 2026-01-07 00:58:41.851431 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851436 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851440 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851445 | orchestrator | 2026-01-07 00:58:41.851449 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-07 00:58:41.851454 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.653) 0:10:01.025 ***** 2026-01-07 00:58:41.851458 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851463 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851470 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851475 | orchestrator | 2026-01-07 00:58:41.851494 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-07 00:58:41.851499 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.352) 0:10:01.378 ***** 2026-01-07 00:58:41.851503 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851508 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851512 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851516 | orchestrator | 2026-01-07 
00:58:41.851521 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-07 00:58:41.851526 | orchestrator | Wednesday 07 January 2026 00:57:39 +0000 (0:00:00.334) 0:10:01.713 ***** 2026-01-07 00:58:41.851530 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851535 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851539 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851544 | orchestrator | 2026-01-07 00:58:41.851548 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-07 00:58:41.851553 | orchestrator | Wednesday 07 January 2026 00:57:40 +0000 (0:00:00.296) 0:10:02.009 ***** 2026-01-07 00:58:41.851561 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851566 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851571 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851575 | orchestrator | 2026-01-07 00:58:41.851580 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-07 00:58:41.851584 | orchestrator | Wednesday 07 January 2026 00:57:40 +0000 (0:00:00.586) 0:10:02.596 ***** 2026-01-07 00:58:41.851589 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851593 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851598 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851602 | orchestrator | 2026-01-07 00:58:41.851606 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-07 00:58:41.851611 | orchestrator | Wednesday 07 January 2026 00:57:41 +0000 (0:00:00.275) 0:10:02.871 ***** 2026-01-07 00:58:41.851615 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851620 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851624 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851629 | orchestrator | 2026-01-07 00:58:41.851633 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-07 00:58:41.851638 | orchestrator | Wednesday 07 January 2026 00:57:41 +0000 (0:00:00.276) 0:10:03.147 ***** 2026-01-07 00:58:41.851642 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.851647 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.851651 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.851656 | orchestrator | 2026-01-07 00:58:41.851660 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-07 00:58:41.851665 | orchestrator | Wednesday 07 January 2026 00:57:41 +0000 (0:00:00.620) 0:10:03.767 ***** 2026-01-07 00:58:41.851672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.851677 | orchestrator | 2026-01-07 00:58:41.851681 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-07 00:58:41.851686 | orchestrator | Wednesday 07 January 2026 00:57:42 +0000 (0:00:00.475) 0:10:04.243 ***** 2026-01-07 00:58:41.851690 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851695 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-07 00:58:41.851700 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:58:41.851704 | orchestrator | 2026-01-07 00:58:41.851709 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:58:41.851713 | orchestrator | Wednesday 07 January 2026 00:57:44 +0000 (0:00:02.366) 0:10:06.610 ***** 2026-01-07 00:58:41.851718 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-07 00:58:41.851722 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-07 00:58:41.851727 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.851732 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-07 00:58:41.851736 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-07 00:58:41.851741 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.851745 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:58:41.851750 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-07 00:58:41.851754 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.851759 | orchestrator | 2026-01-07 00:58:41.851763 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-07 00:58:41.851768 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:01.392) 0:10:08.002 ***** 2026-01-07 00:58:41.851772 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.851777 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.851781 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.851786 | orchestrator | 2026-01-07 00:58:41.851790 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-07 00:58:41.851795 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.285) 0:10:08.287 ***** 2026-01-07 00:58:41.851803 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.851807 | orchestrator | 2026-01-07 00:58:41.851812 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-07 00:58:41.851816 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.464) 0:10:08.752 ***** 2026-01-07 00:58:41.851821 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.851826 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.851834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.851838 | orchestrator | 2026-01-07 00:58:41.851843 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-07 00:58:41.851847 | orchestrator | Wednesday 07 January 2026 00:57:48 +0000 (0:00:01.207) 0:10:09.959 ***** 2026-01-07 00:58:41.851852 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851856 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:58:41.851861 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851865 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851870 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:58:41.851875 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-07 00:58:41.851879 | orchestrator | 2026-01-07 00:58:41.851884 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-07 00:58:41.851888 | orchestrator | Wednesday 07 January 2026 00:57:53 +0000 (0:00:05.795) 0:10:15.755 ***** 2026-01-07 00:58:41.851893 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851897 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:58:41.851902 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851906 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:58:41.851911 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 00:58:41.851915 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 00:58:41.851920 | orchestrator | 2026-01-07 00:58:41.851924 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-07 00:58:41.851929 | orchestrator | Wednesday 07 January 2026 00:57:56 +0000 (0:00:02.334) 0:10:18.090 ***** 2026-01-07 00:58:41.851933 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-07 00:58:41.851938 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.851942 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-07 00:58:41.851947 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.851952 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-07 00:58:41.851956 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.851961 | orchestrator | 2026-01-07 00:58:41.851969 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-07 00:58:41.851973 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:01.155) 0:10:19.246 ***** 2026-01-07 00:58:41.851978 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-07 00:58:41.851986 | orchestrator | 2026-01-07 00:58:41.851991 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-07 00:58:41.851995 | orchestrator | Wednesday 07 January 2026 00:57:57 +0000 (0:00:00.253) 0:10:19.499 ***** 2026-01-07 00:58:41.852000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-07 00:58:41.852004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852023 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.852027 | orchestrator | 2026-01-07 00:58:41.852032 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-07 00:58:41.852036 | orchestrator | Wednesday 07 January 2026 00:57:58 +0000 (0:00:01.197) 0:10:20.697 ***** 2026-01-07 00:58:41.852041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-07 00:58:41.852063 | orchestrator | skipping: [testbed-node-3] 2026-01-07 
00:58:41.852068 | orchestrator | 2026-01-07 00:58:41.852073 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-07 00:58:41.852077 | orchestrator | Wednesday 07 January 2026 00:57:59 +0000 (0:00:00.641) 0:10:21.338 ***** 2026-01-07 00:58:41.852082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-07 00:58:41.852086 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-07 00:58:41.852095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-07 00:58:41.852103 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-07 00:58:41.852200 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-07 00:58:41.852235 | orchestrator | 2026-01-07 00:58:41.852240 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-07 00:58:41.852246 | orchestrator | Wednesday 07 January 2026 00:58:28 +0000 (0:00:29.414) 0:10:50.753 ***** 2026-01-07 00:58:41.852251 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.852255 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.852260 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.852264 | orchestrator | 2026-01-07 00:58:41.852269 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-07 00:58:41.852278 | orchestrator | 
Wednesday 07 January 2026 00:58:29 +0000 (0:00:00.344) 0:10:51.098 ***** 2026-01-07 00:58:41.852283 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.852288 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.852292 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.852296 | orchestrator | 2026-01-07 00:58:41.852301 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-07 00:58:41.852305 | orchestrator | Wednesday 07 January 2026 00:58:29 +0000 (0:00:00.328) 0:10:51.426 ***** 2026-01-07 00:58:41.852310 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.852315 | orchestrator | 2026-01-07 00:58:41.852319 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-07 00:58:41.852324 | orchestrator | Wednesday 07 January 2026 00:58:30 +0000 (0:00:00.767) 0:10:52.194 ***** 2026-01-07 00:58:41.852335 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.852340 | orchestrator | 2026-01-07 00:58:41.852344 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-07 00:58:41.852349 | orchestrator | Wednesday 07 January 2026 00:58:30 +0000 (0:00:00.532) 0:10:52.726 ***** 2026-01-07 00:58:41.852353 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.852357 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.852362 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.852366 | orchestrator | 2026-01-07 00:58:41.852371 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-07 00:58:41.852375 | orchestrator | Wednesday 07 January 2026 00:58:32 +0000 (0:00:01.258) 0:10:53.985 ***** 2026-01-07 00:58:41.852380 | orchestrator | changed: 
[testbed-node-3] 2026-01-07 00:58:41.852384 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.852389 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.852393 | orchestrator | 2026-01-07 00:58:41.852398 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-07 00:58:41.852402 | orchestrator | Wednesday 07 January 2026 00:58:33 +0000 (0:00:01.334) 0:10:55.320 ***** 2026-01-07 00:58:41.852407 | orchestrator | changed: [testbed-node-3] 2026-01-07 00:58:41.852411 | orchestrator | changed: [testbed-node-4] 2026-01-07 00:58:41.852416 | orchestrator | changed: [testbed-node-5] 2026-01-07 00:58:41.852420 | orchestrator | 2026-01-07 00:58:41.852425 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-07 00:58:41.852429 | orchestrator | Wednesday 07 January 2026 00:58:35 +0000 (0:00:01.693) 0:10:57.013 ***** 2026-01-07 00:58:41.852434 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.852439 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.852443 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-07 00:58:41.852448 | orchestrator | 2026-01-07 00:58:41.852452 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-07 00:58:41.852457 | orchestrator | Wednesday 07 January 2026 00:58:37 +0000 (0:00:02.548) 0:10:59.561 ***** 2026-01-07 00:58:41.852461 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.852466 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.852470 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.852475 | orchestrator 
| 2026-01-07 00:58:41.852492 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-07 00:58:41.852497 | orchestrator | Wednesday 07 January 2026 00:58:38 +0000 (0:00:00.366) 0:10:59.928 ***** 2026-01-07 00:58:41.852501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 00:58:41.852510 | orchestrator | 2026-01-07 00:58:41.852514 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-07 00:58:41.852519 | orchestrator | Wednesday 07 January 2026 00:58:38 +0000 (0:00:00.560) 0:11:00.489 ***** 2026-01-07 00:58:41.852526 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.852531 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.852536 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.852540 | orchestrator | 2026-01-07 00:58:41.852545 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-07 00:58:41.852549 | orchestrator | Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.640) 0:11:01.129 ***** 2026-01-07 00:58:41.852554 | orchestrator | skipping: [testbed-node-3] 2026-01-07 00:58:41.852558 | orchestrator | skipping: [testbed-node-4] 2026-01-07 00:58:41.852563 | orchestrator | skipping: [testbed-node-5] 2026-01-07 00:58:41.852567 | orchestrator | 2026-01-07 00:58:41.852572 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-07 00:58:41.852576 | orchestrator | Wednesday 07 January 2026 00:58:39 +0000 (0:00:00.365) 0:11:01.495 ***** 2026-01-07 00:58:41.852581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-07 00:58:41.852585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-07 00:58:41.852590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-07 00:58:41.852594 | orchestrator 
| skipping: [testbed-node-3] 2026-01-07 00:58:41.852599 | orchestrator | 2026-01-07 00:58:41.852603 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-07 00:58:41.852608 | orchestrator | Wednesday 07 January 2026 00:58:40 +0000 (0:00:00.652) 0:11:02.148 ***** 2026-01-07 00:58:41.852612 | orchestrator | ok: [testbed-node-3] 2026-01-07 00:58:41.852617 | orchestrator | ok: [testbed-node-4] 2026-01-07 00:58:41.852621 | orchestrator | ok: [testbed-node-5] 2026-01-07 00:58:41.852626 | orchestrator | 2026-01-07 00:58:41.852631 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:58:41.852635 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-07 00:58:41.852640 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-07 00:58:41.852645 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-07 00:58:41.852650 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-07 00:58:41.852654 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-07 00:58:41.852662 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-07 00:58:41.852667 | orchestrator | 2026-01-07 00:58:41.852671 | orchestrator | 2026-01-07 00:58:41.852676 | orchestrator | 2026-01-07 00:58:41.852681 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:58:41.852685 | orchestrator | Wednesday 07 January 2026 00:58:40 +0000 (0:00:00.288) 0:11:02.437 ***** 2026-01-07 00:58:41.852690 | orchestrator | =============================================================================== 
2026-01-07 00:58:41.852694 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 53.33s
2026-01-07 00:58:41.852699 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.53s
2026-01-07 00:58:41.852703 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.14s
2026-01-07 00:58:41.852708 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.41s
2026-01-07 00:58:41.852712 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.93s
2026-01-07 00:58:41.852721 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.47s
2026-01-07 00:58:41.852725 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.46s
2026-01-07 00:58:41.852729 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.53s
2026-01-07 00:58:41.852734 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.00s
2026-01-07 00:58:41.852738 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.04s
2026-01-07 00:58:41.852743 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.87s
2026-01-07 00:58:41.852747 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.46s
2026-01-07 00:58:41.852752 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.80s
2026-01-07 00:58:41.852756 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.33s
2026-01-07 00:58:41.852761 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.92s
2026-01-07 00:58:41.852765 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.88s
2026-01-07 00:58:41.852770 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.79s
2026-01-07 00:58:41.852774 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.77s
2026-01-07 00:58:41.852779 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.64s
2026-01-07 00:58:41.852783 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.54s
2026-01-07 00:58:41.852788 | orchestrator | 2026-01-07 00:58:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:44.889294 | orchestrator | 2026-01-07 00:58:44 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:58:44.892293 | orchestrator | 2026-01-07 00:58:44 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:58:44.894547 | orchestrator | 2026-01-07 00:58:44 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:58:44.895136 | orchestrator | 2026-01-07 00:58:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:47.942547 | orchestrator | 2026-01-07 00:58:47 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:58:47.944119 | orchestrator | 2026-01-07 00:58:47 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:58:47.946253 | orchestrator | 2026-01-07 00:58:47 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:58:47.946353 | orchestrator | 2026-01-07 00:58:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:50.994895 | orchestrator | 2026-01-07 00:58:50 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:58:50.997044 | orchestrator | 2026-01-07 00:58:50 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:58:50.998986 | orchestrator | 2026-01-07 00:58:51 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:58:50.999586 | orchestrator | 2026-01-07 00:58:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:54.046512 | orchestrator | 2026-01-07 00:58:54 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:58:54.055822 | orchestrator | 2026-01-07 00:58:54 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:58:54.059338 | orchestrator | 2026-01-07 00:58:54 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:58:54.059465 | orchestrator | 2026-01-07 00:58:54 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:58:57.105169 | orchestrator | 2026-01-07 00:58:57 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:58:57.108254 | orchestrator | 2026-01-07 00:58:57 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:58:57.110112 | orchestrator | 2026-01-07 00:58:57 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:58:57.110469 | orchestrator | 2026-01-07 00:58:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:59:00.163191 | orchestrator | 2026-01-07 00:59:00 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:59:00.165673 | orchestrator | 2026-01-07 00:59:00 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:59:00.168476 | orchestrator | 2026-01-07 00:59:00 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:59:00.168555 | orchestrator | 2026-01-07 00:59:00 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:59:03.202762 | orchestrator | 2026-01-07 00:59:03 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:59:03.203640 | orchestrator | 2026-01-07 00:59:03 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:59:03.204655 | orchestrator | 2026-01-07 00:59:03 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:59:03.204831 | orchestrator | 2026-01-07 00:59:03 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:59:06.258292 | orchestrator | 2026-01-07 00:59:06 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state STARTED
2026-01-07 00:59:06.258555 | orchestrator | 2026-01-07 00:59:06 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED
2026-01-07 00:59:06.259470 | orchestrator | 2026-01-07 00:59:06 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED
2026-01-07 00:59:06.259578 | orchestrator | 2026-01-07 00:59:06 | INFO  | Wait 1 second(s) until the next check
2026-01-07 00:59:09.312169 | orchestrator | 2026-01-07 00:59:09 | INFO  | Task d55fce8e-2440-465d-92d8-00dfb6663102 is in state SUCCESS
2026-01-07 00:59:09.313578 | orchestrator |
2026-01-07 00:59:09.313634 | orchestrator |
2026-01-07 00:59:09.313643 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 00:59:09.313652 | orchestrator |
2026-01-07 00:59:09.313659 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 00:59:09.313667 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.262) 0:00:00.262 *****
2026-01-07 00:59:09.313674 | orchestrator | ok: [testbed-node-0]
2026-01-07 00:59:09.313682 | orchestrator | ok: [testbed-node-1]
2026-01-07 00:59:09.313690 | orchestrator | ok: [testbed-node-2]
2026-01-07 00:59:09.313696 | orchestrator |
2026-01-07 00:59:09.313704 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 00:59:09.313797 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.286) 0:00:00.549 *****
2026-01-07 00:59:09.313810 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-07 00:59:09.313817 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-07 00:59:09.313823 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-07 00:59:09.313827 | orchestrator | 2026-01-07 00:59:09.313831 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-07 00:59:09.313835 | orchestrator | 2026-01-07 00:59:09.313839 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:59:09.313843 | orchestrator | Wednesday 07 January 2026 00:56:35 +0000 (0:00:00.457) 0:00:01.006 ***** 2026-01-07 00:59:09.313848 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:09.313865 | orchestrator | 2026-01-07 00:59:09.313870 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-07 00:59:09.313874 | orchestrator | Wednesday 07 January 2026 00:56:35 +0000 (0:00:00.494) 0:00:01.500 ***** 2026-01-07 00:59:09.313877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:59:09.313881 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:59:09.313885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-07 00:59:09.313889 | orchestrator | 2026-01-07 00:59:09.313892 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-07 00:59:09.313896 | orchestrator | Wednesday 07 January 2026 00:56:36 +0000 (0:00:00.752) 0:00:02.252 ***** 2026-01-07 00:59:09.313902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.313939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.313944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.313948 | orchestrator | 2026-01-07 00:59:09.313952 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-01-07 00:59:09.313956 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:01.908) 0:00:04.161 ***** 2026-01-07 00:59:09.313960 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:09.313964 | orchestrator | 2026-01-07 00:59:09.313968 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-07 00:59:09.313972 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:00.535) 0:00:04.697 ***** 2026-01-07 00:59:09.313982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.313998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314063 | orchestrator | 2026-01-07 00:59:09.314067 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-07 00:59:09.314071 | orchestrator | Wednesday 07 January 2026 00:56:41 +0000 (0:00:02.847) 0:00:07.545 ***** 2026-01-07 00:59:09.314075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314083 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:09.314091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314104 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:09.314108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314112 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314116 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:09.314120 | orchestrator | 2026-01-07 00:59:09.314123 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-07 00:59:09.314127 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:01.154) 0:00:08.700 ***** 2026-01-07 00:59:09.314135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314148 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:09.314152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314161 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:09.314167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-07 00:59:09.314177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-07 00:59:09.314181 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:09.314225 | orchestrator | 2026-01-07 00:59:09.314229 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-07 00:59:09.314233 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:01.174) 0:00:09.875 ***** 2026-01-07 00:59:09.314237 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314275 | orchestrator | 2026-01-07 
00:59:09.314279 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-07 00:59:09.314282 | orchestrator | Wednesday 07 January 2026 00:56:46 +0000 (0:00:02.680) 0:00:12.556 ***** 2026-01-07 00:59:09.314286 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314290 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:09.314297 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:09.314300 | orchestrator | 2026-01-07 00:59:09.314304 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-07 00:59:09.314308 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:02.852) 0:00:15.409 ***** 2026-01-07 00:59:09.314312 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314315 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:09.314319 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:09.314323 | orchestrator | 2026-01-07 00:59:09.314327 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-07 00:59:09.314330 | orchestrator | Wednesday 07 January 2026 00:56:51 +0000 (0:00:02.289) 0:00:17.698 ***** 2026-01-07 00:59:09.314341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-07 00:59:09.314366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-07 00:59:09.314400 | orchestrator | 2026-01-07 00:59:09.314406 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:59:09.314412 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:02.516) 0:00:20.215 ***** 2026-01-07 00:59:09.314526 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:09.314537 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:09.314543 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:09.314549 | orchestrator | 2026-01-07 00:59:09.314555 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:59:09.314561 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:00.291) 0:00:20.506 ***** 2026-01-07 
00:59:09.314567 | orchestrator | 2026-01-07 00:59:09.314573 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:59:09.314580 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:00.058) 0:00:20.565 ***** 2026-01-07 00:59:09.314586 | orchestrator | 2026-01-07 00:59:09.314592 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-07 00:59:09.314599 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:00.058) 0:00:20.623 ***** 2026-01-07 00:59:09.314606 | orchestrator | 2026-01-07 00:59:09.314612 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-07 00:59:09.314618 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:00.061) 0:00:20.684 ***** 2026-01-07 00:59:09.314622 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:09.314633 | orchestrator | 2026-01-07 00:59:09.314637 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-07 00:59:09.314641 | orchestrator | Wednesday 07 January 2026 00:56:55 +0000 (0:00:00.510) 0:00:21.195 ***** 2026-01-07 00:59:09.314644 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:09.314648 | orchestrator | 2026-01-07 00:59:09.314652 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-07 00:59:09.314656 | orchestrator | Wednesday 07 January 2026 00:56:55 +0000 (0:00:00.187) 0:00:21.382 ***** 2026-01-07 00:59:09.314659 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314663 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:09.314667 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:09.314671 | orchestrator | 2026-01-07 00:59:09.314674 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-07 00:59:09.314678 | orchestrator | Wednesday 07 
January 2026 00:57:47 +0000 (0:00:52.196) 0:01:13.578 ***** 2026-01-07 00:59:09.314682 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314686 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:09.314692 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:09.314697 | orchestrator | 2026-01-07 00:59:09.314701 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-07 00:59:09.314705 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:01:09.544) 0:02:23.123 ***** 2026-01-07 00:59:09.314709 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:09.314713 | orchestrator | 2026-01-07 00:59:09.314717 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-07 00:59:09.314720 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:00:00.722) 0:02:23.846 ***** 2026-01-07 00:59:09.314724 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:09.314728 | orchestrator | 2026-01-07 00:59:09.314732 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-07 00:59:09.314736 | orchestrator | Wednesday 07 January 2026 00:59:00 +0000 (0:00:02.278) 0:02:26.125 ***** 2026-01-07 00:59:09.314740 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:09.314743 | orchestrator | 2026-01-07 00:59:09.314747 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-07 00:59:09.314751 | orchestrator | Wednesday 07 January 2026 00:59:02 +0000 (0:00:02.135) 0:02:28.260 ***** 2026-01-07 00:59:09.314754 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314758 | orchestrator | 2026-01-07 00:59:09.314762 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-07 00:59:09.314766 | orchestrator | Wednesday 07 
January 2026 00:59:04 +0000 (0:00:02.644) 0:02:30.905 ***** 2026-01-07 00:59:09.314770 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:09.314773 | orchestrator | 2026-01-07 00:59:09.314782 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:59:09.314787 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 00:59:09.314791 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:59:09.314799 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-07 00:59:09.314802 | orchestrator | 2026-01-07 00:59:09.314806 | orchestrator | 2026-01-07 00:59:09.314810 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 00:59:09.314814 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:02.499) 0:02:33.404 ***** 2026-01-07 00:59:09.314817 | orchestrator | =============================================================================== 2026-01-07 00:59:09.314821 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 69.54s 2026-01-07 00:59:09.314828 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.20s 2026-01-07 00:59:09.314832 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.85s 2026-01-07 00:59:09.314836 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2026-01-07 00:59:09.314840 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.68s 2026-01-07 00:59:09.314843 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.64s 2026-01-07 00:59:09.314847 | orchestrator | opensearch : Check opensearch containers 
-------------------------------- 2.52s 2026-01-07 00:59:09.314851 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2026-01-07 00:59:09.314855 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.29s 2026-01-07 00:59:09.314858 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.28s 2026-01-07 00:59:09.314862 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.14s 2026-01-07 00:59:09.314866 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.91s 2026-01-07 00:59:09.314869 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.17s 2026-01-07 00:59:09.314873 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.15s 2026-01-07 00:59:09.314877 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.75s 2026-01-07 00:59:09.314880 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2026-01-07 00:59:09.314884 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-01-07 00:59:09.314888 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.51s 2026-01-07 00:59:09.314891 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-01-07 00:59:09.314895 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-07 00:59:09.314899 | orchestrator | 2026-01-07 00:59:09 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:09.315552 | orchestrator | 2026-01-07 00:59:09 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:09.315572 | orchestrator | 2026-01-07 00:59:09 | 
INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:12.355720 | orchestrator | 2026-01-07 00:59:12 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:12.357148 | orchestrator | 2026-01-07 00:59:12 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:12.357577 | orchestrator | 2026-01-07 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:15.405818 | orchestrator | 2026-01-07 00:59:15 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:15.407285 | orchestrator | 2026-01-07 00:59:15 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:15.407349 | orchestrator | 2026-01-07 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:18.449699 | orchestrator | 2026-01-07 00:59:18 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:18.451649 | orchestrator | 2026-01-07 00:59:18 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:18.451693 | orchestrator | 2026-01-07 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:21.502445 | orchestrator | 2026-01-07 00:59:21 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:21.504612 | orchestrator | 2026-01-07 00:59:21 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:21.504740 | orchestrator | 2026-01-07 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:24.553419 | orchestrator | 2026-01-07 00:59:24 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:24.554093 | orchestrator | 2026-01-07 00:59:24 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:24.554222 | orchestrator | 2026-01-07 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:27.600986 | 
orchestrator | 2026-01-07 00:59:27 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state STARTED 2026-01-07 00:59:27.602203 | orchestrator | 2026-01-07 00:59:27 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:27.602243 | orchestrator | 2026-01-07 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:30.648228 | orchestrator | 2026-01-07 00:59:30 | INFO  | Task af23d6fc-5d7b-4978-936f-80f3cc8f17f3 is in state SUCCESS 2026-01-07 00:59:30.649624 | orchestrator | 2026-01-07 00:59:30.649666 | orchestrator | 2026-01-07 00:59:30.649675 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-07 00:59:30.649683 | orchestrator | 2026-01-07 00:59:30.649690 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-07 00:59:30.649697 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.088) 0:00:00.088 ***** 2026-01-07 00:59:30.649704 | orchestrator | ok: [localhost] => { 2026-01-07 00:59:30.649713 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-07 00:59:30.649720 | orchestrator | } 2026-01-07 00:59:30.649727 | orchestrator | 2026-01-07 00:59:30.649734 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-07 00:59:30.649741 | orchestrator | Wednesday 07 January 2026 00:56:34 +0000 (0:00:00.057) 0:00:00.146 ***** 2026-01-07 00:59:30.649748 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-07 00:59:30.649756 | orchestrator | ...ignoring 2026-01-07 00:59:30.649763 | orchestrator | 2026-01-07 00:59:30.649898 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-07 00:59:30.649906 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:02.802) 0:00:02.948 ***** 2026-01-07 00:59:30.649913 | orchestrator | skipping: [localhost] 2026-01-07 00:59:30.649959 | orchestrator | 2026-01-07 00:59:30.649967 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-07 00:59:30.649974 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:00.057) 0:00:03.006 ***** 2026-01-07 00:59:30.649980 | orchestrator | ok: [localhost] 2026-01-07 00:59:30.649986 | orchestrator | 2026-01-07 00:59:30.649992 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 00:59:30.649998 | orchestrator | 2026-01-07 00:59:30.650005 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 00:59:30.650042 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:00.157) 0:00:03.164 ***** 2026-01-07 00:59:30.650052 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.650060 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.650067 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.650074 | orchestrator | 2026-01-07 00:59:30.650082 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 00:59:30.650090 | orchestrator | Wednesday 07 January 2026 00:56:37 +0000 (0:00:00.321) 0:00:03.486 ***** 2026-01-07 00:59:30.650098 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-07 00:59:30.650105 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-07 00:59:30.650113 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-07 00:59:30.650121 | orchestrator | 2026-01-07 00:59:30.650128 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-07 00:59:30.650154 | orchestrator | 2026-01-07 00:59:30.650161 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-07 00:59:30.650168 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:00.682) 0:00:04.168 ***** 2026-01-07 00:59:30.650174 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-07 00:59:30.650181 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-07 00:59:30.650187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-07 00:59:30.650194 | orchestrator | 2026-01-07 00:59:30.650201 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:59:30.650208 | orchestrator | Wednesday 07 January 2026 00:56:38 +0000 (0:00:00.361) 0:00:04.530 ***** 2026-01-07 00:59:30.650216 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:30.650223 | orchestrator | 2026-01-07 00:59:30.650230 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-07 00:59:30.650237 | orchestrator | Wednesday 07 January 2026 00:56:39 +0000 (0:00:00.524) 0:00:05.054 ***** 2026-01-07 00:59:30.650266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650301 | orchestrator | 2026-01-07 00:59:30.650314 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-07 00:59:30.650321 | orchestrator | Wednesday 07 January 2026 00:56:42 +0000 (0:00:02.975) 0:00:08.029 ***** 2026-01-07 00:59:30.650328 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.650335 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650342 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.650350 | orchestrator | 2026-01-07 00:59:30.650357 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-07 00:59:30.650364 | orchestrator | Wednesday 07 January 2026 00:56:43 +0000 (0:00:00.826) 0:00:08.856 ***** 2026-01-07 00:59:30.650404 | orchestrator | 
skipping: [testbed-node-2] 2026-01-07 00:59:30.650412 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650419 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.650425 | orchestrator | 2026-01-07 00:59:30.650432 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-07 00:59:30.650438 | orchestrator | Wednesday 07 January 2026 00:56:44 +0000 (0:00:01.800) 0:00:10.657 ***** 2026-01-07 00:59:30.650446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650492 | orchestrator | 2026-01-07 00:59:30.650499 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-07 00:59:30.650507 | orchestrator | Wednesday 07 January 2026 00:56:48 +0000 (0:00:04.014) 0:00:14.671 ***** 2026-01-07 00:59:30.650514 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650521 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.650528 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.650535 | orchestrator | 2026-01-07 00:59:30.650542 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-07 00:59:30.650548 | orchestrator | Wednesday 07 January 2026 00:56:49 +0000 (0:00:01.090) 0:00:15.762 ***** 2026-01-07 00:59:30.650555 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:30.650561 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:30.650569 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.650575 | orchestrator | 2026-01-07 00:59:30.650582 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:59:30.650589 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:04.515) 0:00:20.277 ***** 2026-01-07 00:59:30.650596 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:30.650604 | orchestrator | 2026-01-07 00:59:30.650611 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-07 00:59:30.650618 | orchestrator | Wednesday 07 January 2026 00:56:54 +0000 (0:00:00.448) 0:00:20.726 ***** 2026-01-07 00:59:30.650636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650651 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.650660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650669 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.650685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650698 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650705 | orchestrator | 2026-01-07 00:59:30.650712 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-07 00:59:30.650719 | orchestrator | Wednesday 07 January 2026 
00:56:57 +0000 (0:00:03.063) 0:00:23.790 ***** 2026-01-07 00:59:30.650727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650736 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
00:59:30.650752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650765 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650773 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650782 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.650789 | orchestrator | 2026-01-07 00:59:30.650796 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-01-07 00:59:30.650804 | orchestrator | Wednesday 07 January 2026 00:57:00 +0000 (0:00:02.741) 0:00:26.531 ***** 2026-01-07 00:59:30.650820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650837 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.650847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650856 
| orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.650868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-07 00:59:30.650881 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
00:59:30.650889 | orchestrator | 2026-01-07 00:59:30.650898 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-07 00:59:30.650906 | orchestrator | Wednesday 07 January 2026 00:57:03 +0000 (0:00:02.454) 0:00:28.986 ***** 2026-01-07 00:59:30.650919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-01-07 00:59:30.650949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-07 00:59:30.650958 | orchestrator | 2026-01-07 00:59:30.650966 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-01-07 00:59:30.650973 | orchestrator | Wednesday 07 January 2026 00:57:07 +0000 (0:00:04.080) 0:00:33.067 ***** 2026-01-07 00:59:30.650981 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.650989 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:30.650997 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:30.651005 | orchestrator | 2026-01-07 00:59:30.651013 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-07 00:59:30.651022 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:00.807) 0:00:33.875 ***** 2026-01-07 00:59:30.651029 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651037 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651044 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651052 | orchestrator | 2026-01-07 00:59:30.651059 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-07 00:59:30.651066 | orchestrator | Wednesday 07 January 2026 00:57:08 +0000 (0:00:00.569) 0:00:34.444 ***** 2026-01-07 00:59:30.651073 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651080 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651087 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651093 | orchestrator | 2026-01-07 00:59:30.651100 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-07 00:59:30.651107 | orchestrator | Wednesday 07 January 2026 00:57:09 +0000 (0:00:00.359) 0:00:34.804 ***** 2026-01-07 00:59:30.651115 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-07 00:59:30.651123 | orchestrator | ...ignoring 2026-01-07 00:59:30.651130 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-07 00:59:30.651137 | orchestrator | ...ignoring 2026-01-07 00:59:30.651144 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-07 00:59:30.651156 | orchestrator | ...ignoring 2026-01-07 00:59:30.651163 | orchestrator | 2026-01-07 00:59:30.651170 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-07 00:59:30.651176 | orchestrator | Wednesday 07 January 2026 00:57:19 +0000 (0:00:10.949) 0:00:45.754 ***** 2026-01-07 00:59:30.651182 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651189 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651195 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651202 | orchestrator | 2026-01-07 00:59:30.651208 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-07 00:59:30.651215 | orchestrator | Wednesday 07 January 2026 00:57:20 +0000 (0:00:00.463) 0:00:46.217 ***** 2026-01-07 00:59:30.651221 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651228 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651234 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651240 | orchestrator | 2026-01-07 00:59:30.651247 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-07 00:59:30.651253 | orchestrator | Wednesday 07 January 2026 00:57:21 +0000 (0:00:00.774) 0:00:46.991 ***** 2026-01-07 00:59:30.651260 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651267 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651273 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651280 | orchestrator | 2026-01-07 00:59:30.651290 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-07 00:59:30.651297 | orchestrator | Wednesday 07 January 2026 00:57:21 +0000 (0:00:00.527) 0:00:47.519 ***** 2026-01-07 00:59:30.651304 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651310 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651317 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651324 | orchestrator | 2026-01-07 00:59:30.651330 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-07 00:59:30.651342 | orchestrator | Wednesday 07 January 2026 00:57:22 +0000 (0:00:00.582) 0:00:48.101 ***** 2026-01-07 00:59:30.651349 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651355 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651363 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651382 | orchestrator | 2026-01-07 00:59:30.651390 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-07 00:59:30.651397 | orchestrator | Wednesday 07 January 2026 00:57:22 +0000 (0:00:00.454) 0:00:48.556 ***** 2026-01-07 00:59:30.651405 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651411 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651418 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651424 | orchestrator | 2026-01-07 00:59:30.651431 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:59:30.651437 | orchestrator | Wednesday 07 January 2026 00:57:23 +0000 (0:00:00.644) 0:00:49.200 ***** 2026-01-07 00:59:30.651444 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651451 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651457 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-07 00:59:30.651464 | orchestrator | 2026-01-07 
00:59:30.651470 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-07 00:59:30.651476 | orchestrator | Wednesday 07 January 2026 00:57:23 +0000 (0:00:00.391) 0:00:49.592 ***** 2026-01-07 00:59:30.651483 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.651490 | orchestrator | 2026-01-07 00:59:30.651497 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-07 00:59:30.651503 | orchestrator | Wednesday 07 January 2026 00:57:33 +0000 (0:00:09.760) 0:00:59.352 ***** 2026-01-07 00:59:30.651510 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651516 | orchestrator | 2026-01-07 00:59:30.651523 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-07 00:59:30.651530 | orchestrator | Wednesday 07 January 2026 00:57:33 +0000 (0:00:00.133) 0:00:59.485 ***** 2026-01-07 00:59:30.651543 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651549 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651556 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651563 | orchestrator | 2026-01-07 00:59:30.651570 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-07 00:59:30.651576 | orchestrator | Wednesday 07 January 2026 00:57:34 +0000 (0:00:01.016) 0:01:00.501 ***** 2026-01-07 00:59:30.651582 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.651588 | orchestrator | 2026-01-07 00:59:30.651595 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-07 00:59:30.651602 | orchestrator | Wednesday 07 January 2026 00:57:42 +0000 (0:00:07.307) 0:01:07.809 ***** 2026-01-07 00:59:30.651609 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651616 | orchestrator | 2026-01-07 00:59:30.651623 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-01-07 00:59:30.651630 | orchestrator | Wednesday 07 January 2026 00:57:43 +0000 (0:00:01.600) 0:01:09.409 ***** 2026-01-07 00:59:30.651637 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.651644 | orchestrator | 2026-01-07 00:59:30.651651 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-07 00:59:30.651658 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 (0:00:02.016) 0:01:11.426 ***** 2026-01-07 00:59:30.651665 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.651673 | orchestrator | 2026-01-07 00:59:30.651679 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-07 00:59:30.651685 | orchestrator | Wednesday 07 January 2026 00:57:45 +0000 (0:00:00.117) 0:01:11.544 ***** 2026-01-07 00:59:30.651691 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651698 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.651706 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.651713 | orchestrator | 2026-01-07 00:59:30.651720 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-07 00:59:30.651727 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.268) 0:01:11.812 ***** 2026-01-07 00:59:30.651734 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.651741 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-07 00:59:30.651748 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:30.651755 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:30.651763 | orchestrator | 2026-01-07 00:59:30.651770 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-07 00:59:30.651777 | orchestrator | skipping: no hosts matched 2026-01-07 00:59:30.651784 | orchestrator | 2026-01-07 00:59:30.651791 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 00:59:30.651798 | orchestrator | 2026-01-07 00:59:30.651805 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 00:59:30.651813 | orchestrator | Wednesday 07 January 2026 00:57:46 +0000 (0:00:00.441) 0:01:12.254 ***** 2026-01-07 00:59:30.651819 | orchestrator | changed: [testbed-node-1] 2026-01-07 00:59:30.651826 | orchestrator | 2026-01-07 00:59:30.651833 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:59:30.651841 | orchestrator | Wednesday 07 January 2026 00:58:02 +0000 (0:00:15.901) 0:01:28.155 ***** 2026-01-07 00:59:30.651848 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651855 | orchestrator | 2026-01-07 00:59:30.651862 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:59:30.651869 | orchestrator | Wednesday 07 January 2026 00:58:17 +0000 (0:00:15.569) 0:01:43.725 ***** 2026-01-07 00:59:30.651877 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.651884 | orchestrator | 2026-01-07 00:59:30.651891 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-07 00:59:30.651898 | orchestrator | 2026-01-07 00:59:30.651909 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 00:59:30.651921 | orchestrator | Wednesday 07 January 2026 00:58:20 +0000 (0:00:02.627) 0:01:46.352 ***** 2026-01-07 00:59:30.651929 | orchestrator | changed: [testbed-node-2] 2026-01-07 00:59:30.651936 | orchestrator | 2026-01-07 00:59:30.651943 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:59:30.651957 | orchestrator | Wednesday 07 January 2026 00:58:38 +0000 (0:00:17.500) 0:02:03.853 ***** 2026-01-07 00:59:30.651964 | 
orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651971 | orchestrator | 2026-01-07 00:59:30.651978 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:59:30.651985 | orchestrator | Wednesday 07 January 2026 00:58:53 +0000 (0:00:15.588) 0:02:19.441 ***** 2026-01-07 00:59:30.651991 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.651998 | orchestrator | 2026-01-07 00:59:30.652005 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-07 00:59:30.652013 | orchestrator | 2026-01-07 00:59:30.652020 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-07 00:59:30.652027 | orchestrator | Wednesday 07 January 2026 00:58:56 +0000 (0:00:02.565) 0:02:22.007 ***** 2026-01-07 00:59:30.652034 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.652041 | orchestrator | 2026-01-07 00:59:30.652048 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-07 00:59:30.652055 | orchestrator | Wednesday 07 January 2026 00:59:13 +0000 (0:00:17.009) 0:02:39.016 ***** 2026-01-07 00:59:30.652063 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.652070 | orchestrator | 2026-01-07 00:59:30.652077 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-07 00:59:30.652084 | orchestrator | Wednesday 07 January 2026 00:59:13 +0000 (0:00:00.568) 0:02:39.584 ***** 2026-01-07 00:59:30.652091 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.652098 | orchestrator | 2026-01-07 00:59:30.652106 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-07 00:59:30.652113 | orchestrator | 2026-01-07 00:59:30.652120 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-07 00:59:30.652127 | orchestrator | 
Wednesday 07 January 2026 00:59:16 +0000 (0:00:02.946) 0:02:42.531 ***** 2026-01-07 00:59:30.652134 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 00:59:30.652142 | orchestrator | 2026-01-07 00:59:30.652149 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-07 00:59:30.652156 | orchestrator | Wednesday 07 January 2026 00:59:17 +0000 (0:00:00.573) 0:02:43.104 ***** 2026-01-07 00:59:30.652163 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.652171 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.652178 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.652185 | orchestrator | 2026-01-07 00:59:30.652192 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-07 00:59:30.652200 | orchestrator | Wednesday 07 January 2026 00:59:19 +0000 (0:00:02.587) 0:02:45.692 ***** 2026-01-07 00:59:30.652207 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.652214 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.652222 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.652229 | orchestrator | 2026-01-07 00:59:30.652236 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-07 00:59:30.652243 | orchestrator | Wednesday 07 January 2026 00:59:22 +0000 (0:00:02.473) 0:02:48.165 ***** 2026-01-07 00:59:30.652251 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.652258 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.652265 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.652273 | orchestrator | 2026-01-07 00:59:30.652280 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-07 00:59:30.652287 | orchestrator | Wednesday 07 January 2026 00:59:24 +0000 (0:00:02.067) 0:02:50.233 ***** 2026-01-07 00:59:30.652295 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.652302 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.652314 | orchestrator | changed: [testbed-node-0] 2026-01-07 00:59:30.652321 | orchestrator | 2026-01-07 00:59:30.652328 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-07 00:59:30.652335 | orchestrator | Wednesday 07 January 2026 00:59:26 +0000 (0:00:02.136) 0:02:52.370 ***** 2026-01-07 00:59:30.652343 | orchestrator | ok: [testbed-node-0] 2026-01-07 00:59:30.652350 | orchestrator | ok: [testbed-node-1] 2026-01-07 00:59:30.652357 | orchestrator | ok: [testbed-node-2] 2026-01-07 00:59:30.652364 | orchestrator | 2026-01-07 00:59:30.652407 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-07 00:59:30.652415 | orchestrator | Wednesday 07 January 2026 00:59:29 +0000 (0:00:03.162) 0:02:55.533 ***** 2026-01-07 00:59:30.652422 | orchestrator | skipping: [testbed-node-0] 2026-01-07 00:59:30.652430 | orchestrator | skipping: [testbed-node-1] 2026-01-07 00:59:30.652437 | orchestrator | skipping: [testbed-node-2] 2026-01-07 00:59:30.652444 | orchestrator | 2026-01-07 00:59:30.652451 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 00:59:30.652459 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-07 00:59:30.652467 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-07 00:59:30.652475 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-07 00:59:30.652483 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-07 00:59:30.652490 | orchestrator | 2026-01-07 00:59:30.652498 | orchestrator | 2026-01-07 00:59:30.652505 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-07 00:59:30.652516 | orchestrator | Wednesday 07 January 2026 00:59:29 +0000 (0:00:00.246) 0:02:55.780 ***** 2026-01-07 00:59:30.652523 | orchestrator | =============================================================================== 2026-01-07 00:59:30.652530 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.40s 2026-01-07 00:59:30.652538 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.16s 2026-01-07 00:59:30.652549 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.01s 2026-01-07 00:59:30.652557 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2026-01-07 00:59:30.652564 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.76s 2026-01-07 00:59:30.652572 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.31s 2026-01-07 00:59:30.652579 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.19s 2026-01-07 00:59:30.652586 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.52s 2026-01-07 00:59:30.652594 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.08s 2026-01-07 00:59:30.652601 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.01s 2026-01-07 00:59:30.652608 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.16s 2026-01-07 00:59:30.652615 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.06s 2026-01-07 00:59:30.652623 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.98s 2026-01-07 00:59:30.652630 | orchestrator | mariadb : Wait for 
MariaDB service to sync WSREP ------------------------ 2.95s 2026-01-07 00:59:30.652637 | orchestrator | Check MariaDB service --------------------------------------------------- 2.80s 2026-01-07 00:59:30.652644 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.74s 2026-01-07 00:59:30.652651 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.59s 2026-01-07 00:59:30.652663 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.47s 2026-01-07 00:59:30.652670 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.45s 2026-01-07 00:59:30.652676 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.14s 2026-01-07 00:59:30.652683 | orchestrator | 2026-01-07 00:59:30 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:30.652689 | orchestrator | 2026-01-07 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:33.708481 | orchestrator | 2026-01-07 00:59:33 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED 2026-01-07 00:59:33.709486 | orchestrator | 2026-01-07 00:59:33 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED 2026-01-07 00:59:33.712407 | orchestrator | 2026-01-07 00:59:33 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 00:59:33.712624 | orchestrator | 2026-01-07 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 00:59:36.743543 | orchestrator | 2026-01-07 00:59:36 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED 2026-01-07 00:59:36.745584 | orchestrator | 2026-01-07 00:59:36 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED 2026-01-07 00:59:36.747226 | orchestrator | 2026-01-07 00:59:36 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state STARTED 2026-01-07 
01:00:46.853130 | orchestrator | 2026-01-07 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:00:49.894811 | orchestrator | 2026-01-07 01:00:49 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED 2026-01-07 01:00:49.897961 | orchestrator | 2026-01-07 01:00:49 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED 2026-01-07 01:00:49.901988 | orchestrator | 2026-01-07 01:00:49 | INFO  | Task 30998aa8-2693-4173-b4e4-9c9b13c200ad is in state SUCCESS 2026-01-07 01:00:49.905400 | orchestrator | 2026-01-07 01:00:49.905452 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 01:00:49.905493 | orchestrator | 2.16.14 2026-01-07 01:00:49.905502 | orchestrator | 2026-01-07 01:00:49.905567 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-07 01:00:49.905574 | orchestrator | 2026-01-07 01:00:49.905580 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-07 01:00:49.905587 | orchestrator | Wednesday 07 January 2026 00:58:45 +0000 (0:00:00.596) 0:00:00.596 ***** 2026-01-07 01:00:49.905593 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:00:49.905599 | orchestrator | 2026-01-07 01:00:49.905605 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-07 01:00:49.905611 | orchestrator | Wednesday 07 January 2026 00:58:46 +0000 (0:00:00.499) 0:00:01.095 ***** 2026-01-07 01:00:49.905632 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.905638 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.905644 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.905650 | orchestrator | 2026-01-07 01:00:49.905656 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-07 
01:00:49.905663 | orchestrator | Wednesday 07 January 2026 00:58:47 +0000 (0:00:00.655) 0:00:01.750 ***** 2026-01-07 01:00:49.905669 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.905675 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.905682 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.905688 | orchestrator | 2026-01-07 01:00:49.905721 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-07 01:00:49.905727 | orchestrator | Wednesday 07 January 2026 00:58:47 +0000 (0:00:00.293) 0:00:02.044 ***** 2026-01-07 01:00:49.905734 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.905740 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.905744 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.905748 | orchestrator | 2026-01-07 01:00:49.905752 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-07 01:00:49.906277 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 (0:00:00.800) 0:00:02.844 ***** 2026-01-07 01:00:49.906296 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906301 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906305 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906309 | orchestrator | 2026-01-07 01:00:49.906313 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-07 01:00:49.906317 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 (0:00:00.291) 0:00:03.136 ***** 2026-01-07 01:00:49.906321 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906324 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906328 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906332 | orchestrator | 2026-01-07 01:00:49.906336 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-07 01:00:49.906340 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 
(0:00:00.275) 0:00:03.412 ***** 2026-01-07 01:00:49.906344 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906348 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906351 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906355 | orchestrator | 2026-01-07 01:00:49.906359 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-07 01:00:49.906363 | orchestrator | Wednesday 07 January 2026 00:58:48 +0000 (0:00:00.273) 0:00:03.685 ***** 2026-01-07 01:00:49.906367 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906371 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906375 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906379 | orchestrator | 2026-01-07 01:00:49.906382 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-07 01:00:49.906386 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:00.481) 0:00:04.167 ***** 2026-01-07 01:00:49.906390 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906393 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906397 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906401 | orchestrator | 2026-01-07 01:00:49.906405 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-07 01:00:49.906412 | orchestrator | Wednesday 07 January 2026 00:58:49 +0000 (0:00:00.268) 0:00:04.436 ***** 2026-01-07 01:00:49.906416 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 01:00:49.906420 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:00:49.906423 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:00:49.906427 | orchestrator | 2026-01-07 01:00:49.906431 | orchestrator | TASK [ceph-facts : Set_fact 
container_exec_cmd] ******************************** 2026-01-07 01:00:49.906435 | orchestrator | Wednesday 07 January 2026 00:58:50 +0000 (0:00:00.612) 0:00:05.049 ***** 2026-01-07 01:00:49.906445 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906449 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906453 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906457 | orchestrator | 2026-01-07 01:00:49.906460 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-07 01:00:49.906464 | orchestrator | Wednesday 07 January 2026 00:58:50 +0000 (0:00:00.436) 0:00:05.486 ***** 2026-01-07 01:00:49.906468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-07 01:00:49.906472 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-07 01:00:49.906478 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-07 01:00:49.906485 | orchestrator | 2026-01-07 01:00:49.906489 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-07 01:00:49.906493 | orchestrator | Wednesday 07 January 2026 00:58:52 +0000 (0:00:02.134) 0:00:07.621 ***** 2026-01-07 01:00:49.906496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-07 01:00:49.906500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-07 01:00:49.906504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-07 01:00:49.906508 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906512 | orchestrator | 2026-01-07 01:00:49.906565 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-07 01:00:49.906571 | orchestrator | Wednesday 07 January 2026 00:58:53 +0000 (0:00:00.701) 0:00:08.322 ***** 2026-01-07 01:00:49.906576 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906589 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906593 | orchestrator | 2026-01-07 01:00:49.906597 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-07 01:00:49.906600 | orchestrator | Wednesday 07 January 2026 00:58:54 +0000 (0:00:00.900) 0:00:09.222 ***** 2026-01-07 01:00:49.906605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.906622 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906626 | orchestrator | 2026-01-07 01:00:49.906630 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-07 01:00:49.906634 | orchestrator | Wednesday 07 January 2026 00:58:54 +0000 (0:00:00.345) 0:00:09.568 ***** 2026-01-07 01:00:49.906641 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '536cef695c75', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-07 00:58:51.457260', 'end': '2026-01-07 00:58:51.491992', 'delta': '0:00:00.034732', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['536cef695c75'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-07 01:00:49.906647 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f200fecdaa03', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-07 00:58:52.236881', 'end': '2026-01-07 00:58:52.267109', 'delta': '0:00:00.030228', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f200fecdaa03'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-07 01:00:49.906662 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '51ab65509e83', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-07 00:58:52.771122', 'end': '2026-01-07 00:58:52.801676', 'delta': '0:00:00.030554', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['51ab65509e83'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-07 01:00:49.906666 | orchestrator | 2026-01-07 01:00:49.906670 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-07 01:00:49.906674 | orchestrator | Wednesday 07 January 2026 00:58:55 +0000 (0:00:00.208) 0:00:09.777 ***** 2026-01-07 01:00:49.906678 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906682 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:00:49.906685 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:00:49.906689 | orchestrator | 2026-01-07 01:00:49.906693 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-07 01:00:49.906697 | orchestrator | Wednesday 07 January 2026 00:58:55 +0000 (0:00:00.443) 
0:00:10.220 ***** 2026-01-07 01:00:49.906700 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-07 01:00:49.906704 | orchestrator | 2026-01-07 01:00:49.906708 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-07 01:00:49.906712 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:00:01.649) 0:00:11.870 ***** 2026-01-07 01:00:49.906716 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906719 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906723 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906727 | orchestrator | 2026-01-07 01:00:49.906731 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-07 01:00:49.906737 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:00:00.310) 0:00:12.180 ***** 2026-01-07 01:00:49.906741 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906745 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906749 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906752 | orchestrator | 2026-01-07 01:00:49.906756 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 01:00:49.906760 | orchestrator | Wednesday 07 January 2026 00:58:57 +0000 (0:00:00.430) 0:00:12.611 ***** 2026-01-07 01:00:49.906764 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906767 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906771 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906775 | orchestrator | 2026-01-07 01:00:49.906779 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-07 01:00:49.906782 | orchestrator | Wednesday 07 January 2026 00:58:58 +0000 (0:00:00.554) 0:00:13.165 ***** 2026-01-07 01:00:49.906786 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:00:49.906790 | 
orchestrator | 2026-01-07 01:00:49.906794 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-07 01:00:49.906797 | orchestrator | Wednesday 07 January 2026 00:58:58 +0000 (0:00:00.138) 0:00:13.303 ***** 2026-01-07 01:00:49.906801 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906805 | orchestrator | 2026-01-07 01:00:49.906809 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-07 01:00:49.906813 | orchestrator | Wednesday 07 January 2026 00:58:58 +0000 (0:00:00.238) 0:00:13.542 ***** 2026-01-07 01:00:49.906816 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906820 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906825 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906829 | orchestrator | 2026-01-07 01:00:49.906833 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-07 01:00:49.906837 | orchestrator | Wednesday 07 January 2026 00:58:59 +0000 (0:00:00.290) 0:00:13.833 ***** 2026-01-07 01:00:49.906841 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906844 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906848 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906852 | orchestrator | 2026-01-07 01:00:49.906856 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-07 01:00:49.906859 | orchestrator | Wednesday 07 January 2026 00:58:59 +0000 (0:00:00.356) 0:00:14.190 ***** 2026-01-07 01:00:49.906863 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906867 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906871 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906874 | orchestrator | 2026-01-07 01:00:49.906878 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 
2026-01-07 01:00:49.906882 | orchestrator | Wednesday 07 January 2026 00:59:00 +0000 (0:00:00.619) 0:00:14.809 ***** 2026-01-07 01:00:49.906886 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906889 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906893 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906897 | orchestrator | 2026-01-07 01:00:49.906901 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-07 01:00:49.906904 | orchestrator | Wednesday 07 January 2026 00:59:00 +0000 (0:00:00.357) 0:00:15.166 ***** 2026-01-07 01:00:49.906908 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906912 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906916 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906919 | orchestrator | 2026-01-07 01:00:49.906923 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-07 01:00:49.906927 | orchestrator | Wednesday 07 January 2026 00:59:00 +0000 (0:00:00.329) 0:00:15.496 ***** 2026-01-07 01:00:49.906931 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906935 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906938 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906955 | orchestrator | 2026-01-07 01:00:49.906959 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-07 01:00:49.906963 | orchestrator | Wednesday 07 January 2026 00:59:01 +0000 (0:00:00.326) 0:00:15.822 ***** 2026-01-07 01:00:49.906967 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:00:49.906970 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:00:49.906974 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:00:49.906978 | orchestrator | 2026-01-07 01:00:49.906981 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 
2026-01-07 01:00:49.906985 | orchestrator | Wednesday 07 January 2026 00:59:01 +0000 (0:00:00.587) 0:00:16.409 ***** 2026-01-07 01:00:49.906990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b', 'dm-uuid-LVM-fp2IefjU1GVqX3ZEIBT9uOVgnwN2u1638mEXPxJGefbIh85IScxE4Rx3rSoyFizJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.906994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1', 'dm-uuid-LVM-FuzWdYFQkSMsW1lHpsMvBoq52G22660tRBUdJeAv1WMjgd3YBxiSYi5ipcrZRTVx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.906998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bec64c4-306d-48cb-b824-91c4511dbf67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:00:49.907053 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--29ea93ed--0a9a--5585--8fd4--59056229f60b-osd--block--29ea93ed--0a9a--5585--8fd4--59056229f60b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7RLssA-pWm9-MKY0-4SYs-vEi3-vzNl-qbfdEs', 'scsi-0QEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef', 'scsi-SQEMU_QEMU_HARDDISK_0dd21d7e-182d-4e2a-b2dc-5d8af31fa2ef'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:00:49.907069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6ed406c7--6b31--5121--9e07--a95f5a11b8c1-osd--block--6ed406c7--6b31--5121--9e07--a95f5a11b8c1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qoKY51-9C1a-6dz1-AENo-Jd2i-fccj-GewRhx', 'scsi-0QEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff', 'scsi-SQEMU_QEMU_HARDDISK_c52f0d9f-ed72-456f-8893-789cce9c22ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:00:49.907074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9', 'scsi-SQEMU_QEMU_HARDDISK_17558d9b-0f92-44fa-9888-3d1d3136e2b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:00:49.907078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-07 01:00:49.907082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e', 'dm-uuid-LVM-nSI3d8WfGayQZEMBqsvBSy4mN6nRtHcmWnDMWxoif45y5uGtb5FXrx1LpD8KATcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-07 01:00:49.907088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1', 
2026-01-07 01:00:49.907092 | orchestrator | skipping: [testbed-node-4] => all block device items (dm-0, dm-1, loop0..loop7, sda incl. partitions sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0); per-item facts condensed: sda is the 80.00 GB QEMU root disk (sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB), sdb and sdc are 20.00 GB QEMU disks already holding Ceph OSD logical volumes (LVM masters dm-0/dm-1), sdd is an empty 20.00 GB QEMU disk, sr0 is the 506.00 KB config-2 QEMU DVD-ROM, and loop0..loop7 are empty loop devices
2026-01-07 01:00:49.907120 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.907178 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.907182 | orchestrator | skipping: [testbed-node-5] => all block device items; same device layout as testbed-node-4 (dm-0, dm-1, loop0..loop7, sda incl. sda1/sda14/sda15/sda16, sdb, sdc, sdd, sr0)
2026-01-07 01:00:49 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:00:49.907277 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.907281 | orchestrator |
2026-01-07 01:00:49.907284 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-07 01:00:49.907288 | orchestrator | Wednesday 07 January 2026 00:59:02 +0000 (0:00:00.629)       0:00:17.039 *****
2026-01-07 01:00:49.907292 | orchestrator | skipping: [testbed-node-3] => every block device item (dm-0, dm-1, loop0..loop7, sda incl. partitions, sdb, sdc, sdd, sr0); skip_reason: Conditional result was False, false_condition: osd_auto_discovery | default(False) | bool
2026-01-07 01:00:49.907354 | orchestrator | skipping: [testbed-node-4] => every block device item, same false_condition (osd_auto_discovery | default(False) | bool); items interleaved with the testbed-node-3 output above
2026-01-07 01:00:49.907460 | orchestrator | skipping: [testbed-node-3]
['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16', 'scsi-SQEMU_QEMU_HARDDISK_12c08242-c6db-441e-a244-fd35f24986d7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640', 'dm-uuid-LVM-cp2mqFBfalJC3YyLofJNvbodHGGMGsPSNbpYNT19FlJ1MqcMaxxX2jWsXD76Bjlm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0b3967c5--6312--5066--b0c3--d93b1266106e-osd--block--0b3967c5--6312--5066--b0c3--d93b1266106e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HTC6J9-bFdj-gkdE-m0y6-kSm3-YMFi-s5ibVj', 'scsi-0QEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab', 'scsi-SQEMU_QEMU_HARDDISK_e8953730-7f10-4622-86b0-9bd54769baab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8', 'dm-uuid-LVM-ug8RvgmxgyB3TsUc6mDRhMl1zkcvTznbCpJ2n4ksy63KYghCeyECOLu5JTAtfGL8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907583 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1-osd--block--f1de19d5--0a66--5bfe--890b--5e52c2bc57c1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j2NvUc-fuSK-tsLs-Ivbq-z3fR-TUD9-TbpDys', 'scsi-0QEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1', 'scsi-SQEMU_QEMU_HARDDISK_2778d154-06c9-4d37-b4c8-396dcdd5fdf1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907597 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151', 'scsi-SQEMU_QEMU_HARDDISK_f78b2b96-168b-421a-aa15-4bebe7f5a151'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907623 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 01:00:49.907630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907654 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16', 'scsi-SQEMU_QEMU_HARDDISK_dbba5bc6-aee5-4aef-b91e-a976a83b6015-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dee3f89e--6ecc--57ac--a128--7ff5a8885640-osd--block--dee3f89e--6ecc--57ac--a128--7ff5a8885640'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BCOpVX-WuWV-18wc-ofv4-NGYr-JZ0a-lNLkxw', 'scsi-0QEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c', 'scsi-SQEMU_QEMU_HARDDISK_6d387afb-e7b9-4a62-89e6-97c0cffa548c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907705 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c1079410--ca98--5ed2--be64--415d52b0d3f8-osd--block--c1079410--ca98--5ed2--be64--415d52b0d3f8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kRcGXZ-yQY8-AL7O-ugfG-2RUM-V8JJ-gPzG4q', 'scsi-0QEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5', 'scsi-SQEMU_QEMU_HARDDISK_995dcd08-654d-4bc0-ab24-70981ba073f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8', 'scsi-SQEMU_QEMU_HARDDISK_82b3532f-8ed6-4997-a6d4-62047998b4b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-07 01:00:49.907723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-07-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-07 01:00:49.907730 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.907737 | orchestrator |
2026-01-07 01:00:49.907743 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-07 01:00:49.907750 | orchestrator | Wednesday 07 January 2026 00:59:02 +0000 (0:00:00.608) 0:00:17.647 *****
2026-01-07 01:00:49.907757 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:00:49.907763 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:00:49.907770 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:00:49.907779 | orchestrator |
2026-01-07 01:00:49.907786 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-07 01:00:49.907793 | orchestrator | Wednesday 07 January 2026 00:59:03 +0000 (0:00:00.656) 0:00:18.304 *****
2026-01-07 01:00:49.907800 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:00:49.907806 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:00:49.907813 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:00:49.907820 | orchestrator |
2026-01-07 01:00:49.907827 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-07 01:00:49.907834 | orchestrator | Wednesday 07 January 2026 00:59:04 +0000 (0:00:00.571) 0:00:18.876 *****
2026-01-07 01:00:49.907841 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:00:49.907848 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:00:49.907854 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:00:49.907860 | orchestrator |
2026-01-07 01:00:49.907867 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-07 01:00:49.907874 | orchestrator | Wednesday 07 January 2026 00:59:05 +0000 (0:00:01.569) 0:00:20.445 *****
2026-01-07 01:00:49.907880 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.907887 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.907894 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.907901 | orchestrator |
2026-01-07 01:00:49.907908 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-07 01:00:49.907914 | orchestrator | Wednesday 07 January 2026 00:59:06 +0000 (0:00:00.326) 0:00:20.771 *****
2026-01-07 01:00:49.907921 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.907928 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.907934 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.907941 | orchestrator |
2026-01-07 01:00:49.907948 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-07 01:00:49.907954 | orchestrator | Wednesday 07 January 2026 00:59:06 +0000 (0:00:00.415) 0:00:21.187 *****
2026-01-07 01:00:49.907961 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.907968 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.907975 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.907982 | orchestrator |
2026-01-07 01:00:49.907990 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-07 01:00:49.907996 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:00.518) 0:00:21.706 *****
2026-01-07 01:00:49.908003 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 01:00:49.908009 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-07 01:00:49.908016 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 01:00:49.908023 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-07 01:00:49.908030 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-07 01:00:49.908036 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 01:00:49.908046 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-07 01:00:49.908053 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-07 01:00:49.908059 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-07 01:00:49.908066 | orchestrator |
2026-01-07 01:00:49.908073 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-07 01:00:49.908080 | orchestrator | Wednesday 07 January 2026 00:59:07 +0000 (0:00:00.944) 0:00:22.651 *****
2026-01-07 01:00:49.908087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-07 01:00:49.908093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-07 01:00:49.908100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-07 01:00:49.908106 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908113 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-07 01:00:49.908120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-07 01:00:49.908126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-07 01:00:49.908137 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.908144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-07 01:00:49.908151 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-07 01:00:49.908157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-07 01:00:49.908163 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.908169 | orchestrator |
2026-01-07 01:00:49.908175 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-07 01:00:49.908182 | orchestrator | Wednesday 07 January 2026 00:59:08 +0000 (0:00:00.364) 0:00:23.015 *****
2026-01-07 01:00:49.908189 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:00:49.908208 | orchestrator |
2026-01-07 01:00:49.908215 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-07 01:00:49.908227 | orchestrator | Wednesday 07 January 2026 00:59:09 +0000 (0:00:00.751) 0:00:23.766 *****
2026-01-07 01:00:49.908235 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908241 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.908247 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.908254 | orchestrator |
2026-01-07 01:00:49.908261 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-07 01:00:49.908268 | orchestrator | Wednesday 07 January 2026 00:59:09 +0000 (0:00:00.342) 0:00:24.109 *****
2026-01-07 01:00:49.908274 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908281 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.908288 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.908294 | orchestrator |
2026-01-07 01:00:49.908301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-07 01:00:49.908309 | orchestrator | Wednesday 07 January 2026 00:59:09 +0000 (0:00:00.353) 0:00:24.463 *****
2026-01-07 01:00:49.908315 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908322 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.908329 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:00:49.908336 | orchestrator |
2026-01-07 01:00:49.908343 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-07 01:00:49.908351 | orchestrator | Wednesday 07 January 2026 00:59:10 +0000 (0:00:00.309) 0:00:24.772 *****
2026-01-07 01:00:49.908359 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:00:49.908366 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:00:49.908374 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:00:49.908386 | orchestrator |
2026-01-07 01:00:49.908398 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-07 01:00:49.908411 | orchestrator | Wednesday 07 January 2026 00:59:11 +0000 (0:00:00.954) 0:00:25.726 *****
2026-01-07 01:00:49.908424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 01:00:49.908439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 01:00:49.908452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 01:00:49.908460 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908473 | orchestrator |
2026-01-07 01:00:49.908479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-07 01:00:49.908486 | orchestrator | Wednesday 07 January 2026 00:59:11 +0000 (0:00:00.386) 0:00:26.113 *****
2026-01-07 01:00:49.908493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 01:00:49.908499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 01:00:49.908506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 01:00:49.908513 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908520 | orchestrator |
2026-01-07 01:00:49.908526 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-07 01:00:49.908533 | orchestrator | Wednesday 07 January 2026 00:59:11 +0000 (0:00:00.441) 0:00:26.555 *****
2026-01-07 01:00:49.908547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 01:00:49.908554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-07 01:00:49.908560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-07 01:00:49.908567 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908573 | orchestrator |
2026-01-07 01:00:49.908579 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-07 01:00:49.908586 | orchestrator | Wednesday 07 January 2026 00:59:12 +0000 (0:00:00.405) 0:00:26.960 *****
2026-01-07 01:00:49.908593 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:00:49.908600 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:00:49.908606 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:00:49.908613 | orchestrator |
2026-01-07 01:00:49.908619 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-07 01:00:49.908626 | orchestrator | Wednesday 07 January 2026 00:59:12 +0000 (0:00:00.337) 0:00:27.298 *****
2026-01-07 01:00:49.908633 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-07 01:00:49.908643 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-07 01:00:49.908650 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-07 01:00:49.908656 | orchestrator |
2026-01-07 01:00:49.908663 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-07 01:00:49.908670 | orchestrator | Wednesday 07 January 2026 00:59:13 +0000 (0:00:00.530) 0:00:27.829 *****
2026-01-07 01:00:49.908677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 01:00:49.908683 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 01:00:49.908690 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 01:00:49.908697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 01:00:49.908703 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-07 01:00:49.908710 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 01:00:49.908717 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 01:00:49.908723 | orchestrator |
2026-01-07 01:00:49.908730 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-07 01:00:49.908737 | orchestrator | Wednesday 07 January 2026 00:59:14 +0000 (0:00:01.069) 0:00:28.898 *****
2026-01-07 01:00:49.908743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-07 01:00:49.908750 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-07 01:00:49.908757 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-07 01:00:49.908763 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-07 01:00:49.908770 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-07 01:00:49.908780 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-07 01:00:49.908787 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-07 01:00:49.908794 | orchestrator |
2026-01-07 01:00:49.908800 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-01-07 01:00:49.908807 | orchestrator | Wednesday 07 January 2026 00:59:16 +0000 (0:00:02.124) 0:00:31.023 *****
2026-01-07 01:00:49.908814 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:00:49.908820 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:00:49.908827 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-01-07 01:00:49.908833 | orchestrator |
2026-01-07 01:00:49.908840 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-01-07 01:00:49.908847 | orchestrator | Wednesday 07 January 2026 00:59:16 +0000 (0:00:00.450) 0:00:31.473 *****
2026-01-07 01:00:49.908858 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-07 01:00:49.908866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-07 01:00:49.908873 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-07 01:00:49.908880 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-07 01:00:49.908887 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-07 01:00:49.908894 | orchestrator |
2026-01-07 01:00:49.908901 | orchestrator | TASK [generate keys]
*********************************************************** 2026-01-07 01:00:49.908908 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:42.585) 0:01:14.059 ***** 2026-01-07 01:00:49.908915 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908935 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908949 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908955 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-07 01:00:49.908962 | orchestrator | 2026-01-07 01:00:49.908969 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-07 01:00:49.908975 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:21.675) 0:01:35.734 ***** 2026-01-07 01:00:49.908982 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908988 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.908995 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909008 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909015 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909021 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-07 01:00:49.909028 | orchestrator | 2026-01-07 01:00:49.909035 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-07 01:00:49.909042 | orchestrator | Wednesday 07 January 2026 01:00:31 +0000 (0:00:10.861) 0:01:46.596 ***** 2026-01-07 01:00:49.909048 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909059 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:00:49.909066 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:00:49.909073 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909084 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:00:49.909091 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:00:49.909098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909105 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:00:49.909111 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:00:49.909118 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909125 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-07 01:00:49.909131 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-07 01:00:49.909138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-07 01:00:49.909144 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-07 01:00:49.909151 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-07 01:00:49.909157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-07 01:00:49.909163 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-07 01:00:49.909169 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-07 01:00:49.909176 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-01-07 01:00:49.909182 | orchestrator |
2026-01-07 01:00:49.909189 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:00:49.909211 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-07 01:00:49.909219 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-07 01:00:49.909226 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-07 01:00:49.909233 | orchestrator |
2026-01-07 01:00:49.909240 | orchestrator |
2026-01-07 01:00:49.909247 | orchestrator |
2026-01-07 01:00:49.909253 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:00:49.909260 | orchestrator | Wednesday 07 January 2026 01:00:48 +0000 (0:00:16.926) 0:02:03.522 *****
2026-01-07 01:00:49.909267 | orchestrator | ===============================================================================
2026-01-07 01:00:49.909274 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.59s
2026-01-07 01:00:49.909280 | orchestrator | generate keys ---------------------------------------------------------- 21.68s
2026-01-07 01:00:49.909287 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.93s
2026-01-07 01:00:49.909294 | orchestrator | get keys from monitors ------------------------------------------------- 10.86s
2026-01-07 01:00:49.909301 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s
2026-01-07 01:00:49.909307 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.12s
2026-01-07 01:00:49.909314 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s
2026-01-07 01:00:49.909321 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.57s
2026-01-07 01:00:49.909331 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s
2026-01-07 01:00:49.909342 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.95s
2026-01-07 01:00:49.909349 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s
2026-01-07 01:00:49.909356 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.90s
2026-01-07 01:00:49.909363 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2026-01-07 01:00:49.909370 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s
2026-01-07 01:00:49.909377 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.70s
2026-01-07 01:00:49.909384 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s
2026-01-07 01:00:49.909391 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2026-01-07 01:00:49.909398 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s
2026-01-07 01:00:49.909404 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.62s
2026-01-07 01:00:49.909410 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2026-01-07 01:00:52.953439 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:00:52.955371 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:00:52.957458 | orchestrator | 2026-01-07 01:00:52 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED
2026-01-07 01:00:52.957526 | orchestrator | 2026-01-07 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:00:56.006080 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:00:56.007604 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:00:56.009267 | orchestrator | 2026-01-07 01:00:56 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED
2026-01-07 01:00:56.009340 | orchestrator | 2026-01-07 01:00:56 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:00:59.063817 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:00:59.070540 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:00:59.073443 | orchestrator | 2026-01-07 01:00:59 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED
2026-01-07 01:00:59.073489 | orchestrator | 2026-01-07 01:00:59 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:02.118110 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:02.119800 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:02.122745 | orchestrator | 2026-01-07 01:01:02 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED
2026-01-07 01:01:02.122806 | orchestrator | 2026-01-07 01:01:02 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:05.159568 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:05.164234 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:05.164763 | orchestrator | 2026-01-07 01:01:05 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state STARTED
2026-01-07 01:01:05.166887 | orchestrator | 2026-01-07 01:01:05 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:08.206805 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:08.207287 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:08.209703 | orchestrator | 2026-01-07 01:01:08 | INFO  | Task 95e81063-1d75-4133-87cf-6fc81276ff9b is in state SUCCESS
2026-01-07 01:01:08.211593 | orchestrator | 2026-01-07 01:01:08 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:08.212516 | orchestrator |
2026-01-07 01:01:08.212548 | orchestrator |
2026-01-07 01:01:08.212556 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:01:08.212563 | orchestrator |
2026-01-07 01:01:08.212569 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:01:08.212577 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:00.237) 0:00:00.237 *****
2026-01-07 01:01:08.212583 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.212591 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.212597 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.212604 | orchestrator |
2026-01-07 01:01:08.212667 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:01:08.212681 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:00.275) 0:00:00.512 *****
2026-01-07 01:01:08.212746 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-01-07 01:01:08.212758 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-01-07 01:01:08.212764 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-01-07 01:01:08.212776 | orchestrator |
2026-01-07 01:01:08.212787 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-01-07 01:01:08.212799 | orchestrator |
2026-01-07 01:01:08.212810 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-07 01:01:08.212823 | orchestrator | Wednesday 07 January 2026 00:59:35 +0000 (0:00:00.361) 0:00:00.873 *****
2026-01-07 01:01:08.212834 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:01:08.212844 | orchestrator |
2026-01-07 01:01:08.212975 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-01-07 01:01:08.212988 | orchestrator | Wednesday 07 January 2026 00:59:35 +0000 (0:00:00.453) 0:00:01.327 *****
2026-01-07 01:01:08.213003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 01:01:08.213049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 01:01:08.213058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 01:01:08.213070 | orchestrator |
2026-01-07 01:01:08.213077 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-01-07 01:01:08.213083 | orchestrator | Wednesday 07 January 2026 00:59:36 +0000 (0:00:01.089) 0:00:02.416 *****
2026-01-07 01:01:08.213090 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.213096 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.213102 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.213108 | orchestrator |
2026-01-07 01:01:08.213115 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-07 01:01:08.213121 | orchestrator | Wednesday 07 January 2026 00:59:37 +0000 (0:00:00.362) 0:00:02.778 *****
2026-01-07 01:01:08.213133 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-07 01:01:08.213140 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-07 01:01:08.213147 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-01-07 01:01:08.213153 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-01-07 01:01:08.213180 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-01-07 01:01:08.213195 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-01-07 01:01:08.213207 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-01-07 01:01:08.213217 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-01-07 01:01:08.213227 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-07 01:01:08.213238 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-07 01:01:08.213246 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-01-07 01:01:08.213252 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-01-07 01:01:08.213259 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-01-07 01:01:08.213265 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-01-07 01:01:08.213271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-01-07 01:01:08.213277 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-01-07 01:01:08.213283 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-07 01:01:08.213289 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-07 01:01:08.213296 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-01-07 01:01:08.213302 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-01-07 01:01:08.213308 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-01-07 01:01:08.213315 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-01-07 01:01:08.213332 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-01-07 01:01:08.213342 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-01-07 01:01:08.213353 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-07 01:01:08.213366 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-07 01:01:08.213377 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-07 01:01:08.213388 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-07 01:01:08.213398 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-07 01:01:08.213409 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-07 01:01:08.213416 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-07 01:01:08.213422 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-07 01:01:08.213429 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-07 01:01:08.213436 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-07 01:01:08.213443 | orchestrator |
2026-01-07 01:01:08.213449 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.213455 | orchestrator | Wednesday 07 January 2026 00:59:37 +0000 (0:00:00.649) 0:00:03.428 *****
2026-01-07 01:01:08.213462 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.213468 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.213474 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.213480 | orchestrator |
2026-01-07 01:01:08.213487 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.213494 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:00.293) 0:00:03.722 *****
2026-01-07 01:01:08.213505 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213512 | orchestrator |
2026-01-07 01:01:08.213519 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.213525 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:00.113) 0:00:03.835 *****
2026-01-07 01:01:08.213531 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213538 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.213544 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.213550 | orchestrator |
2026-01-07 01:01:08.213556 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.213567 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:00.393) 0:00:04.228 *****
2026-01-07 01:01:08.213574 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.213580 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.213586 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.213593 | orchestrator |
2026-01-07 01:01:08.213600 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.213609 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:00.338) 0:00:04.567 *****
2026-01-07 01:01:08.213616 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213629 | orchestrator |
2026-01-07 01:01:08.213636 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.213643 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:00.129) 0:00:04.697 *****
2026-01-07 01:01:08.213651 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213659 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.213666 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.213673 | orchestrator |
2026-01-07 01:01:08.213684 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.213695 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:00.306) 0:00:05.003 *****
2026-01-07 01:01:08.213705 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.213715 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.213727 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.213737 | orchestrator |
2026-01-07 01:01:08.213747 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.213759 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:00.331) 0:00:05.334 *****
2026-01-07 01:01:08.213771 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213782 | orchestrator |
2026-01-07 01:01:08.213793 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.213805 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.405) 0:00:05.740 *****
2026-01-07 01:01:08.213816 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213828 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.213839 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.213850 | orchestrator |
2026-01-07 01:01:08.213861 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.213873 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.306) 0:00:06.047 *****
2026-01-07 01:01:08.213885 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.213896 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.213907 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.213915 | orchestrator |
2026-01-07 01:01:08.213923 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.213930 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.331) 0:00:06.378 *****
2026-01-07 01:01:08.213938 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213946 | orchestrator |
2026-01-07 01:01:08.213954 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.213962 | orchestrator | Wednesday 07 January 2026 00:59:40 +0000 (0:00:00.121) 0:00:06.499 *****
2026-01-07 01:01:08.213970 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.213976 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.213982 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.213988 | orchestrator |
2026-01-07 01:01:08.213995 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.214001 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:00.298) 0:00:06.798 *****
2026-01-07 01:01:08.214007 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.214057 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.214071 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.214082 | orchestrator |
2026-01-07 01:01:08.214094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.214106 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:00.567) 0:00:07.366 *****
2026-01-07 01:01:08.214117 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.214128 | orchestrator |
2026-01-07 01:01:08.214136 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.214142 | orchestrator | Wednesday 07 January 2026 00:59:41 +0000 (0:00:00.127) 0:00:07.493 *****
2026-01-07 01:01:08.214148 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.214154 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.214214 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.214222 | orchestrator |
2026-01-07 01:01:08.214235 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.214241 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.316) 0:00:07.810 *****
2026-01-07 01:01:08.214248 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:01:08.214254 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:01:08.214260 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:01:08.214266 | orchestrator |
2026-01-07 01:01:08.214273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-07 01:01:08.214279 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.346) 0:00:08.156 *****
2026-01-07 01:01:08.214285 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.214291 | orchestrator |
2026-01-07 01:01:08.214298 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-07 01:01:08.214304 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.132) 0:00:08.289 *****
2026-01-07 01:01:08.214310 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.214316 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.214325 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.214336 | orchestrator |
2026-01-07 01:01:08.214347 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-07 01:01:08.214368 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:00.306) 0:00:08.595
***** 2026-01-07 01:01:08.214381 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:01:08.214391 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:01:08.214403 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:01:08.214414 | orchestrator | 2026-01-07 01:01:08.214426 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 01:01:08.214434 | orchestrator | Wednesday 07 January 2026 00:59:43 +0000 (0:00:00.606) 0:00:09.202 ***** 2026-01-07 01:01:08.214441 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214447 | orchestrator | 2026-01-07 01:01:08.214458 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 01:01:08.214465 | orchestrator | Wednesday 07 January 2026 00:59:43 +0000 (0:00:00.134) 0:00:09.336 ***** 2026-01-07 01:01:08.214471 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214477 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.214483 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.214489 | orchestrator | 2026-01-07 01:01:08.214495 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 01:01:08.214502 | orchestrator | Wednesday 07 January 2026 00:59:43 +0000 (0:00:00.317) 0:00:09.654 ***** 2026-01-07 01:01:08.214508 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:01:08.214514 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:01:08.214520 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:01:08.214526 | orchestrator | 2026-01-07 01:01:08.214533 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 01:01:08.214539 | orchestrator | Wednesday 07 January 2026 00:59:44 +0000 (0:00:00.348) 0:00:10.002 ***** 2026-01-07 01:01:08.214545 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214551 | orchestrator | 2026-01-07 01:01:08.214557 | orchestrator | TASK [horizon : 
Update custom policy file name] ******************************** 2026-01-07 01:01:08.214563 | orchestrator | Wednesday 07 January 2026 00:59:44 +0000 (0:00:00.133) 0:00:10.136 ***** 2026-01-07 01:01:08.214570 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214576 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.214582 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.214588 | orchestrator | 2026-01-07 01:01:08.214594 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 01:01:08.214601 | orchestrator | Wednesday 07 January 2026 00:59:44 +0000 (0:00:00.312) 0:00:10.448 ***** 2026-01-07 01:01:08.214607 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:01:08.214613 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:01:08.214619 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:01:08.214625 | orchestrator | 2026-01-07 01:01:08.214631 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 01:01:08.214643 | orchestrator | Wednesday 07 January 2026 00:59:45 +0000 (0:00:00.649) 0:00:11.098 ***** 2026-01-07 01:01:08.214649 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214656 | orchestrator | 2026-01-07 01:01:08.214665 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 01:01:08.214675 | orchestrator | Wednesday 07 January 2026 00:59:45 +0000 (0:00:00.146) 0:00:11.245 ***** 2026-01-07 01:01:08.214686 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214695 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.214703 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.214712 | orchestrator | 2026-01-07 01:01:08.214721 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-07 01:01:08.214734 | orchestrator | Wednesday 07 January 2026 00:59:45 +0000 (0:00:00.326) 
0:00:11.572 ***** 2026-01-07 01:01:08.214748 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:01:08.214756 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:01:08.214764 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:01:08.214773 | orchestrator | 2026-01-07 01:01:08.214782 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-07 01:01:08.214791 | orchestrator | Wednesday 07 January 2026 00:59:46 +0000 (0:00:00.342) 0:00:11.915 ***** 2026-01-07 01:01:08.214799 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214807 | orchestrator | 2026-01-07 01:01:08.214815 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-07 01:01:08.214823 | orchestrator | Wednesday 07 January 2026 00:59:46 +0000 (0:00:00.152) 0:00:12.067 ***** 2026-01-07 01:01:08.214832 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.214844 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.214852 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.214861 | orchestrator | 2026-01-07 01:01:08.214870 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-07 01:01:08.214878 | orchestrator | Wednesday 07 January 2026 00:59:46 +0000 (0:00:00.599) 0:00:12.667 ***** 2026-01-07 01:01:08.214887 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:01:08.214896 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:01:08.214904 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:01:08.214914 | orchestrator | 2026-01-07 01:01:08.214923 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-07 01:01:08.214932 | orchestrator | Wednesday 07 January 2026 00:59:48 +0000 (0:00:01.876) 0:00:14.544 ***** 2026-01-07 01:01:08.214941 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 
01:01:08.214950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 01:01:08.214956 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-07 01:01:08.214961 | orchestrator | 2026-01-07 01:01:08.214967 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-07 01:01:08.214972 | orchestrator | Wednesday 07 January 2026 00:59:50 +0000 (0:00:01.888) 0:00:16.433 ***** 2026-01-07 01:01:08.214977 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 01:01:08.214984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 01:01:08.214990 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-07 01:01:08.214995 | orchestrator | 2026-01-07 01:01:08.215008 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-07 01:01:08.215013 | orchestrator | Wednesday 07 January 2026 00:59:53 +0000 (0:00:02.589) 0:00:19.023 ***** 2026-01-07 01:01:08.215019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 01:01:08.215024 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 01:01:08.215042 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-07 01:01:08.215047 | orchestrator | 2026-01-07 01:01:08.215053 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-07 01:01:08.215058 | orchestrator | Wednesday 07 January 2026 00:59:55 +0000 (0:00:02.262) 0:00:21.286 ***** 2026-01-07 01:01:08.215064 | orchestrator | 
skipping: [testbed-node-0] 2026-01-07 01:01:08.215069 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.215075 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.215080 | orchestrator | 2026-01-07 01:01:08.215086 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-07 01:01:08.215091 | orchestrator | Wednesday 07 January 2026 00:59:55 +0000 (0:00:00.347) 0:00:21.633 ***** 2026-01-07 01:01:08.215097 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.215102 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.215108 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.215113 | orchestrator | 2026-01-07 01:01:08.215118 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-07 01:01:08.215124 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.322) 0:00:21.956 ***** 2026-01-07 01:01:08.215129 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:01:08.215135 | orchestrator | 2026-01-07 01:01:08.215140 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-07 01:01:08.215146 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.733) 0:00:22.689 ***** 2026-01-07 01:01:08.215154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:01:08.215232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:01:08.215246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2026-01-07 01:01:08.215257 | orchestrator | 2026-01-07 01:01:08.215263 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-07 01:01:08.215268 | orchestrator | Wednesday 07 January 2026 00:59:58 +0000 (0:00:01.689) 0:00:24.378 ***** 2026-01-07 01:01:08.215284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215291 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.215301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215311 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.215321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215327 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.215333 | orchestrator | 2026-01-07 01:01:08.215339 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-07 01:01:08.215345 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:00.708) 0:00:25.087 ***** 2026-01-07 01:01:08.215358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215369 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:01:08.215375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215381 | 
orchestrator | skipping: [testbed-node-1] 2026-01-07 01:01:08.215395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-07 01:01:08.215406 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:01:08.215412 | orchestrator | 2026-01-07 01:01:08.215418 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-07 01:01:08.215423 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:00.883) 0:00:25.970 ***** 2026-01-07 01:01:08.215429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:01:08.215457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-07 01:01:08.215465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-07 01:01:08.215475 | orchestrator |
2026-01-07 01:01:08.215481 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-07 01:01:08.215486 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:01.878) 0:00:27.848 *****
2026-01-07 01:01:08.215492 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:01:08.215497 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:01:08.215503 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:01:08.215508 | orchestrator |
2026-01-07 01:01:08.215522 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-07 01:01:08.215532 | orchestrator | Wednesday 07 January 2026 01:00:02
+0000 (0:00:00.323) 0:00:28.172 *****
2026-01-07 01:01:08.215538 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:01:08.215543 | orchestrator |
2026-01-07 01:01:08.215550 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-07 01:01:08.215558 | orchestrator | Wednesday 07 January 2026 01:00:03 +0000 (0:00:00.565) 0:00:28.737 *****
2026-01-07 01:01:08.215567 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:01:08.215575 | orchestrator |
2026-01-07 01:01:08.215583 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-07 01:01:08.215603 | orchestrator | Wednesday 07 January 2026 01:00:05 +0000 (0:00:02.650) 0:00:31.388 *****
2026-01-07 01:01:08.215616 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:01:08.215623 | orchestrator |
2026-01-07 01:01:08.215633 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-07 01:01:08.215643 | orchestrator | Wednesday 07 January 2026 01:00:08 +0000 (0:00:02.641) 0:00:34.030 *****
2026-01-07 01:01:08.215651 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:01:08.215660 | orchestrator |
2026-01-07 01:01:08.215668 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-07 01:01:08.215676 | orchestrator | Wednesday 07 January 2026 01:00:22 +0000 (0:00:13.721) 0:00:47.751 *****
2026-01-07 01:01:08.215685 | orchestrator |
2026-01-07 01:01:08.215693 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-07 01:01:08.215701 | orchestrator | Wednesday 07 January 2026 01:00:22 +0000 (0:00:00.077) 0:00:47.828 *****
2026-01-07 01:01:08.215709 | orchestrator |
2026-01-07 01:01:08.215717 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-07 01:01:08.215726 | orchestrator | Wednesday 07 January 2026 01:00:22 +0000 (0:00:00.077) 0:00:47.906 *****
2026-01-07 01:01:08.215734 | orchestrator |
2026-01-07 01:01:08.215741 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-07 01:01:08.215749 | orchestrator | Wednesday 07 January 2026 01:00:22 +0000 (0:00:00.068) 0:00:47.974 *****
2026-01-07 01:01:08.215757 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:01:08.215765 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:01:08.215774 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:01:08.215782 | orchestrator |
2026-01-07 01:01:08.215792 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:01:08.215801 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-07 01:01:08.215811 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-07 01:01:08.215827 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-07 01:01:08.215838 | orchestrator |
2026-01-07 01:01:08.215847 | orchestrator |
2026-01-07 01:01:08.215855 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:01:08.215865 | orchestrator | Wednesday 07 January 2026 01:01:05 +0000 (0:00:42.891) 0:01:30.865 *****
2026-01-07 01:01:08.215874 | orchestrator | ===============================================================================
2026-01-07 01:01:08.215884 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.89s
2026-01-07 01:01:08.215893 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.72s
2026-01-07 01:01:08.215904 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.65s
2026-01-07 01:01:08.215913 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.64s
2026-01-07 01:01:08.215924 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.59s
2026-01-07 01:01:08.215933 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.26s
2026-01-07 01:01:08.215943 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s
2026-01-07 01:01:08.215952 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.88s
2026-01-07 01:01:08.215960 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.88s
2026-01-07 01:01:08.215969 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.69s
2026-01-07 01:01:08.215979 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s
2026-01-07 01:01:08.215988 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s
2026-01-07 01:01:08.215998 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s
2026-01-07 01:01:08.216007 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s
2026-01-07 01:01:08.216017 | orchestrator | horizon : Update policy file name --------------------------------------- 0.65s
2026-01-07 01:01:08.216026 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-01-07 01:01:08.216035 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s
2026-01-07 01:01:08.216045 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s
2026-01-07 01:01:08.216054 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s
2026-01-07 01:01:08.216064 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s
2026-01-07 01:01:11.263239 | orchestrator | 2026-01-07 01:01:11 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:11.266676 | orchestrator | 2026-01-07 01:01:11 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:11.266985 | orchestrator | 2026-01-07 01:01:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:14.316488 | orchestrator | 2026-01-07 01:01:14 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:14.317412 | orchestrator | 2026-01-07 01:01:14 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:14.317685 | orchestrator | 2026-01-07 01:01:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:17.377164 | orchestrator | 2026-01-07 01:01:17 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:17.378624 | orchestrator | 2026-01-07 01:01:17 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:17.378680 | orchestrator | 2026-01-07 01:01:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:20.428428 | orchestrator | 2026-01-07 01:01:20 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:20.430295 | orchestrator | 2026-01-07 01:01:20 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:20.430353 | orchestrator | 2026-01-07 01:01:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:23.482801 | orchestrator | 2026-01-07 01:01:23 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:23.484429 | orchestrator | 2026-01-07 01:01:23 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED 2026-01-07
01:01:23.484485 | orchestrator | 2026-01-07 01:01:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:26.537925 | orchestrator | 2026-01-07 01:01:26 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:26.539730 | orchestrator | 2026-01-07 01:01:26 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state STARTED
2026-01-07 01:01:26.539811 | orchestrator | 2026-01-07 01:01:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:29.594057 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:29.595672 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task c7d9371b-550b-4b0d-b67c-156f43706f59 is in state SUCCESS
2026-01-07 01:01:29.597490 | orchestrator | 2026-01-07 01:01:29 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:29.597538 | orchestrator | 2026-01-07 01:01:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:32.635442 | orchestrator | 2026-01-07 01:01:32 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:32.637388 | orchestrator | 2026-01-07 01:01:32 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:32.637434 | orchestrator | 2026-01-07 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:35.680042 | orchestrator | 2026-01-07 01:01:35 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:35.681142 | orchestrator | 2026-01-07 01:01:35 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:35.681168 | orchestrator | 2026-01-07 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:38.721844 | orchestrator | 2026-01-07 01:01:38 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:38.722913 | orchestrator | 2026-01-07 01:01:38 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:38.722950 | orchestrator | 2026-01-07 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:41.767774 | orchestrator | 2026-01-07 01:01:41 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:41.767845 | orchestrator | 2026-01-07 01:01:41 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:41.767851 | orchestrator | 2026-01-07 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:44.808002 | orchestrator | 2026-01-07 01:01:44 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:44.808888 | orchestrator | 2026-01-07 01:01:44 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:44.808936 | orchestrator | 2026-01-07 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:47.851500 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:47.854567 | orchestrator | 2026-01-07 01:01:47 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:47.854626 | orchestrator | 2026-01-07 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:50.894923 | orchestrator | 2026-01-07 01:01:50 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:50.895816 | orchestrator | 2026-01-07 01:01:50 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:50.895880 | orchestrator | 2026-01-07 01:01:50 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:53.948833 | orchestrator | 2026-01-07 01:01:53 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:53.950318 | orchestrator | 2026-01-07 01:01:53 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:53.950365 | orchestrator | 2026-01-07 01:01:53 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:01:57.011677 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:01:57.014102 | orchestrator | 2026-01-07 01:01:57 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:01:57.014223 | orchestrator | 2026-01-07 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:00.082944 | orchestrator | 2026-01-07 01:02:00 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:00.083776 | orchestrator | 2026-01-07 01:02:00 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:00.083807 | orchestrator | 2026-01-07 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:03.150054 | orchestrator | 2026-01-07 01:02:03 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:03.151346 | orchestrator | 2026-01-07 01:02:03 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:03.151388 | orchestrator | 2026-01-07 01:02:03 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:06.205105 | orchestrator | 2026-01-07 01:02:06 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:06.206178 | orchestrator | 2026-01-07 01:02:06 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:06.206219 | orchestrator | 2026-01-07 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:09.252771 | orchestrator | 2026-01-07 01:02:09 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:09.254907 | orchestrator | 2026-01-07 01:02:09 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:09.254959 | orchestrator | 2026-01-07 01:02:09 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:12.299630 | orchestrator | 2026-01-07 01:02:12 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:12.300606 | orchestrator | 2026-01-07 01:02:12 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:12.300688 | orchestrator | 2026-01-07 01:02:12 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:15.346601 | orchestrator | 2026-01-07 01:02:15 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state STARTED
2026-01-07 01:02:15.348812 | orchestrator | 2026-01-07 01:02:15 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:15.348897 | orchestrator | 2026-01-07 01:02:15 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:18.384083 | orchestrator | 2026-01-07 01:02:18 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:18.386290 | orchestrator | 2026-01-07 01:02:18 | INFO  | Task d2944740-b1f4-4534-8b58-2f5d28dd6109 is in state SUCCESS
2026-01-07 01:02:18.386754 | orchestrator |
2026-01-07 01:02:18.386770 | orchestrator |
2026-01-07 01:02:18.386774 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-07 01:02:18.386778 | orchestrator |
2026-01-07 01:02:18.386781 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-07 01:02:18.386785 | orchestrator | Wednesday 07 January 2026 01:00:53 +0000 (0:00:00.154) 0:00:00.154 *****
2026-01-07 01:02:18.386788 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-07 01:02:18.386792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386795 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386798 |
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:02:18.386802 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386805 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-07 01:02:18.386808 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-07 01:02:18.386817 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-07 01:02:18.386820 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-07 01:02:18.386824 | orchestrator |
2026-01-07 01:02:18.386827 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-07 01:02:18.386830 | orchestrator | Wednesday 07 January 2026 01:00:58 +0000 (0:00:04.720) 0:00:04.874 *****
2026-01-07 01:02:18.386833 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-07 01:02:18.386836 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386839 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386842 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:02:18.386845 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-07 01:02:18.386851 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-07 01:02:18.386854 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-07 01:02:18.386857 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-07 01:02:18.386860 | orchestrator |
2026-01-07 01:02:18.386863 | orchestrator | TASK [Create share directory] **************************************************
2026-01-07 01:02:18.386866 | orchestrator | Wednesday 07 January 2026 01:01:02 +0000 (0:00:03.943) 0:00:08.818 *****
2026-01-07 01:02:18.386870 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-07 01:02:18.386873 | orchestrator |
2026-01-07 01:02:18.386876 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-07 01:02:18.386879 | orchestrator | Wednesday 07 January 2026 01:01:03 +0000 (0:00:01.000) 0:00:09.818 *****
2026-01-07 01:02:18.386883 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-07 01:02:18.386886 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386897 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386900 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:02:18.386903 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386906 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-07 01:02:18.386909 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-07 01:02:18.386912 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-07 01:02:18.386915 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-07 01:02:18.386918 | orchestrator |
2026-01-07 01:02:18.386921 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-07 01:02:18.386924 | orchestrator | Wednesday 07 January 2026 01:01:16 +0000 (0:00:13.071) 0:00:22.890 *****
2026-01-07 01:02:18.386927 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-07 01:02:18.386931 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-07 01:02:18.386934 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-07 01:02:18.386937 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-07 01:02:18.386945 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-07 01:02:18.386948 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-07 01:02:18.386951 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-07 01:02:18.386954 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-07 01:02:18.386957 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-07 01:02:18.386960 | orchestrator |
2026-01-07 01:02:18.386963 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-07 01:02:18.386966 | orchestrator | Wednesday 07 January 2026 01:01:19 +0000 (0:00:02.969) 0:00:25.859 *****
2026-01-07 01:02:18.386970 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-07 01:02:18.386975 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386981 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.386988 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:02:18.386996 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-07 01:02:18.387035 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-07 01:02:18.387042 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-07 01:02:18.387048 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-07 01:02:18.387053 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-07 01:02:18.387059 | orchestrator |
2026-01-07 01:02:18.387065 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:02:18.387071 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:02:18.387076 | orchestrator |
2026-01-07 01:02:18.387079 | orchestrator |
2026-01-07 01:02:18.387082 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:02:18.387085 | orchestrator | Wednesday 07 January 2026 01:01:26 +0000 (0:00:06.859) 0:00:32.719 *****
2026-01-07 01:02:18.387092 | orchestrator | ===============================================================================
2026-01-07 01:02:18.387095 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.07s
2026-01-07 01:02:18.387098 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.86s
2026-01-07 01:02:18.387101 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.72s
2026-01-07 01:02:18.387105 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.94s
2026-01-07 01:02:18.387108 | orchestrator | Check if target directories exist --------------------------------------- 2.97s
2026-01-07 01:02:18.387111 | orchestrator | Create share directory -------------------------------------------------- 1.00s
2026-01-07 01:02:18.387114 | orchestrator |
2026-01-07 01:02:18.388572 | orchestrator |
2026-01-07 01:02:18.388600 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:02:18.388606 | orchestrator |
2026-01-07 01:02:18.388612 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:02:18.388617 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:00.226) 0:00:00.226 *****
2026-01-07 01:02:18.388622 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.388628 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:18.388634 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:18.388639 | orchestrator |
2026-01-07 01:02:18.388644 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:02:18.388649 | orchestrator | Wednesday 07 January 2026 00:59:34 +0000 (0:00:00.275) 0:00:00.502 *****
2026-01-07 01:02:18.388654 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-07 01:02:18.388660 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-07 01:02:18.388665 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-07 01:02:18.388670 | orchestrator |
2026-01-07 01:02:18.388674 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-07 01:02:18.388680 | orchestrator |
2026-01-07 01:02:18.388685 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07
01:02:18.388690 | orchestrator | Wednesday 07 January 2026 00:59:35 +0000 (0:00:00.380) 0:00:00.882 ***** 2026-01-07 01:02:18.388696 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:18.388701 | orchestrator | 2026-01-07 01:02:18.388707 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-07 01:02:18.388712 | orchestrator | Wednesday 07 January 2026 00:59:35 +0000 (0:00:00.501) 0:00:01.383 ***** 2026-01-07 01:02:18.388720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.388734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.388754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.388761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.388799 | orchestrator | 2026-01-07 01:02:18.388805 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-07 01:02:18.388831 | orchestrator 
| Wednesday 07 January 2026 00:59:37 +0000 (0:00:01.653) 0:00:03.037 ***** 2026-01-07 01:02:18.388836 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:18.388842 | orchestrator | 2026-01-07 01:02:18.388847 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-07 01:02:18.388852 | orchestrator | Wednesday 07 January 2026 00:59:37 +0000 (0:00:00.118) 0:00:03.155 ***** 2026-01-07 01:02:18.388857 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:18.388860 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:18.388863 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:18.388866 | orchestrator | 2026-01-07 01:02:18.388869 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-07 01:02:18.388872 | orchestrator | Wednesday 07 January 2026 00:59:37 +0000 (0:00:00.369) 0:00:03.524 ***** 2026-01-07 01:02:18.388875 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:02:18.388878 | orchestrator | 2026-01-07 01:02:18.388881 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-07 01:02:18.388884 | orchestrator | Wednesday 07 January 2026 00:59:38 +0000 (0:00:00.781) 0:00:04.306 ***** 2026-01-07 01:02:18.388888 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:02:18.388891 | orchestrator | 2026-01-07 01:02:18.388894 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-07 01:02:18.388897 | orchestrator | Wednesday 07 January 2026 00:59:39 +0000 (0:00:00.525) 0:00:04.831 ***** 2026-01-07 01:02:18.388900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.388911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.388917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-07 01:02:18.389274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-07 01:02:18.389303 | orchestrator | 2026-01-07 01:02:18.389306 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-07 01:02:18.389310 | orchestrator | Wednesday 07 January 2026 00:59:42 +0000 (0:00:03.513) 0:00:08.344 ***** 2026-01-07 01:02:18.389318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:02:18.389322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:02:18.389327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:02:18.389331 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:18.389336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:02:18.389339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:02:18.389346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:02:18.389349 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:18.389353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:02:18.389358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:02:18.389362 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:02:18.389365 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:02:18.389368 | orchestrator | 2026-01-07 01:02:18.389371 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-07 01:02:18.389376 | orchestrator | Wednesday 07 January 2026 00:59:43 +0000 (0:00:00.763) 0:00:09.108 ***** 2026-01-07 01:02:18.389379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2026-01-07 01:02:18.389385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:02:18.389389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:02:18.389395 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:02:18.389398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-07 01:02:18.389402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-07 01:02:18.389407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-07 01:02:18.389410 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:02:18.389416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389428 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389431 | orchestrator |
2026-01-07 01:02:18.389434 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-01-07 01:02:18.389437 | orchestrator | Wednesday 07 January 2026 00:59:44 +0000 (0:00:00.832) 0:00:09.941 *****
2026-01-07 01:02:18.389442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389486 | orchestrator |
2026-01-07 01:02:18.389489 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-01-07 01:02:18.389492 | orchestrator | Wednesday 07 January 2026 00:59:46 +0000 (0:00:02.810) 0:00:12.751 *****
2026-01-07 01:02:18.389496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389533 | orchestrator |
2026-01-07 01:02:18.389536 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-01-07 01:02:18.389539 | orchestrator | Wednesday 07 January 2026 00:59:52 +0000 (0:00:05.620) 0:00:18.372 *****
2026-01-07 01:02:18.389542 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.389546 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:18.389549 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:18.389552 | orchestrator |
2026-01-07 01:02:18.389555 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-01-07 01:02:18.389560 | orchestrator | Wednesday 07 January 2026 00:59:54 +0000 (0:00:01.568) 0:00:19.941 *****
2026-01-07 01:02:18.389563 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389566 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389569 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389572 | orchestrator |
2026-01-07 01:02:18.389575 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-07 01:02:18.389580 | orchestrator | Wednesday 07 January 2026 00:59:54 +0000 (0:00:00.640) 0:00:20.582 *****
2026-01-07 01:02:18.389583 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389587 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389590 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389593 | orchestrator |
2026-01-07 01:02:18.389596 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-07 01:02:18.389599 | orchestrator | Wednesday 07 January 2026 00:59:55 +0000 (0:00:00.311) 0:00:20.893 *****
2026-01-07 01:02:18.389602 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389605 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389608 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389611 | orchestrator |
2026-01-07 01:02:18.389614 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-07 01:02:18.389617 | orchestrator | Wednesday 07 January 2026 00:59:55 +0000 (0:00:00.590) 0:00:21.483 *****
2026-01-07 01:02:18.389620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389632 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389650 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389667 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389670 | orchestrator |
2026-01-07 01:02:18.389673 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 01:02:18.389676 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.607) 0:00:22.091 *****
2026-01-07 01:02:18.389679 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389682 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389685 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389688 | orchestrator |
2026-01-07 01:02:18.389701 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-01-07 01:02:18.389708 | orchestrator | Wednesday 07 January 2026 00:59:56 +0000 (0:00:00.290) 0:00:22.381 *****
2026-01-07 01:02:18.389711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 01:02:18.389716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 01:02:18.389719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-07 01:02:18.389722 | orchestrator |
2026-01-07 01:02:18.389725 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-07 01:02:18.389729 | orchestrator | Wednesday 07 January 2026 00:59:58 +0000 (0:00:01.826) 0:00:24.208 *****
2026-01-07 01:02:18.389732 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:02:18.389735 | orchestrator |
2026-01-07 01:02:18.389738 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-07 01:02:18.389741 | orchestrator | Wednesday 07 January 2026 00:59:59 +0000 (0:00:00.973) 0:00:25.182 *****
2026-01-07 01:02:18.389744 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389747 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389750 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389753 | orchestrator |
2026-01-07 01:02:18.389756 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-07 01:02:18.389759 | orchestrator | Wednesday 07 January 2026 01:00:00 +0000 (0:00:01.000) 0:00:26.183 *****
2026-01-07 01:02:18.389762 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-07 01:02:18.389765 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:02:18.389768 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-07 01:02:18.389771 | orchestrator |
2026-01-07 01:02:18.389775 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-07 01:02:18.389778 | orchestrator | Wednesday 07 January 2026 01:00:01 +0000 (0:00:01.480) 0:00:27.663 *****
2026-01-07 01:02:18.389781 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.389784 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:18.389787 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:18.389790 | orchestrator |
2026-01-07 01:02:18.389793 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-07 01:02:18.389796 | orchestrator | Wednesday 07 January 2026 01:00:02 +0000 (0:00:00.348) 0:00:28.012 *****
2026-01-07 01:02:18.389799 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-07 01:02:18.389803 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-07 01:02:18.389806 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-07 01:02:18.389809 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-07 01:02:18.389814 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-07 01:02:18.389817 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-07 01:02:18.389820 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-07 01:02:18.389823 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-07 01:02:18.389826 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-07 01:02:18.389829 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-07 01:02:18.389832 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-07 01:02:18.389835 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-07 01:02:18.389838 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-07 01:02:18.389841 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-07 01:02:18.389846 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-07 01:02:18.389849 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:02:18.389852 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:02:18.389855 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:02:18.389858 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:02:18.389861 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:02:18.389865 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:02:18.389868 | orchestrator |
2026-01-07 01:02:18.389871 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-07 01:02:18.389874 | orchestrator | Wednesday 07 January 2026 01:00:10 +0000 (0:00:08.502) 0:00:36.515 *****
2026-01-07 01:02:18.389877 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:02:18.389880 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:02:18.389883 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:02:18.389887 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:02:18.389891 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:02:18.389897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:02:18.389900 | orchestrator |
2026-01-07 01:02:18.389904 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-01-07 01:02:18.389908 | orchestrator | Wednesday 07 January 2026 01:00:13 +0000 (0:00:02.655) 0:00:39.171 *****
2026-01-07 01:02:18.389912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-07 01:02:18.389928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-07 01:02:18.389945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-07 01:02:18.389958 | orchestrator |
2026-01-07 01:02:18.389961 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 01:02:18.389965 | orchestrator
| Wednesday 07 January 2026 01:00:15 +0000 (0:00:02.263) 0:00:41.434 *****
2026-01-07 01:02:18.389969 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.389972 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.389976 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.389979 | orchestrator |
2026-01-07 01:02:18.389983 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-07 01:02:18.389987 | orchestrator | Wednesday 07 January 2026 01:00:15 +0000 (0:00:00.305) 0:00:41.739 *****
2026-01-07 01:02:18.389991 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.389994 | orchestrator |
2026-01-07 01:02:18.389998 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-07 01:02:18.390002 | orchestrator | Wednesday 07 January 2026 01:00:18 +0000 (0:00:02.148) 0:00:43.888 *****
2026-01-07 01:02:18.390056 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390062 | orchestrator |
2026-01-07 01:02:18.390067 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-07 01:02:18.390072 | orchestrator | Wednesday 07 January 2026 01:00:20 +0000 (0:00:02.077) 0:00:45.965 *****
2026-01-07 01:02:18.390078 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:18.390082 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.390086 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:18.390089 | orchestrator |
2026-01-07 01:02:18.390093 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-07 01:02:18.390100 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:01.032) 0:00:46.998 *****
2026-01-07 01:02:18.390107 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.390110 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:18.390114 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:18.390117 | orchestrator |
2026-01-07 01:02:18.390166 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-07 01:02:18.390171 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:00.335) 0:00:47.334 *****
2026-01-07 01:02:18.390174 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390178 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.390181 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.390185 | orchestrator |
2026-01-07 01:02:18.390189 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-07 01:02:18.390192 | orchestrator | Wednesday 07 January 2026 01:00:21 +0000 (0:00:00.335) 0:00:47.669 *****
2026-01-07 01:02:18.390196 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390199 | orchestrator |
2026-01-07 01:02:18.390203 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-01-07 01:02:18.390207 | orchestrator | Wednesday 07 January 2026 01:00:35 +0000 (0:00:13.794) 0:01:01.464 *****
2026-01-07 01:02:18.390211 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390214 | orchestrator |
2026-01-07 01:02:18.390218 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-07 01:02:18.390222 | orchestrator | Wednesday 07 January 2026 01:00:46 +0000 (0:00:00.065) 0:01:12.609 *****
2026-01-07 01:02:18.390225 | orchestrator |
2026-01-07 01:02:18.390229 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-07 01:02:18.390233 | orchestrator | Wednesday 07 January 2026 01:00:46 +0000 (0:00:00.066) 0:01:12.675 *****
2026-01-07 01:02:18.390237 | orchestrator |
2026-01-07 01:02:18.390244 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-07 01:02:18.390249 | orchestrator | Wednesday 07 January 2026 01:00:46 +0000 (0:00:00.066) 0:01:12.742 *****
2026-01-07 01:02:18.390255 | orchestrator |
2026-01-07 01:02:18.390260 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-01-07 01:02:18.390267 | orchestrator | Wednesday 07 January 2026 01:00:47 +0000 (0:00:00.063) 0:01:12.805 *****
2026-01-07 01:02:18.390272 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390278 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:18.390283 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:18.390289 | orchestrator |
2026-01-07 01:02:18.390295 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-01-07 01:02:18.390301 | orchestrator | Wednesday 07 January 2026 01:00:58 +0000 (0:00:11.573) 0:01:24.379 *****
2026-01-07 01:02:18.390308 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390313 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:18.390318 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:18.390321 | orchestrator |
2026-01-07 01:02:18.390325 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-07 01:02:18.390328 | orchestrator | Wednesday 07 January 2026 01:01:07 +0000 (0:00:09.342) 0:01:33.721 *****
2026-01-07 01:02:18.390331 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390334 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:02:18.390337 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:02:18.390340 | orchestrator |
2026-01-07 01:02:18.390343 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 01:02:18.390346 | orchestrator | Wednesday 07 January 2026 01:01:19 +0000 (0:00:11.825) 0:01:45.546 *****
2026-01-07 01:02:18.390349 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:02:18.390352 | orchestrator |
2026-01-07 01:02:18.390356 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-07 01:02:18.390359 | orchestrator | Wednesday 07 January 2026 01:01:20 +0000 (0:00:00.762) 0:01:46.309 *****
2026-01-07 01:02:18.390365 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:02:18.390368 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.390371 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:02:18.390374 | orchestrator |
2026-01-07 01:02:18.390377 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-07 01:02:18.390380 | orchestrator | Wednesday 07 January 2026 01:01:21 +0000 (0:00:00.739) 0:01:47.049 *****
2026-01-07 01:02:18.390384 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:02:18.390387 | orchestrator |
2026-01-07 01:02:18.390392 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-07 01:02:18.390395 | orchestrator | Wednesday 07 January 2026 01:01:22 +0000 (0:00:01.650) 0:01:48.699 *****
2026-01-07 01:02:18.390399 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-07 01:02:18.390402 | orchestrator |
2026-01-07 01:02:18.390405 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-01-07 01:02:18.390408 | orchestrator | Wednesday 07 January 2026 01:01:35 +0000 (0:00:12.322) 0:02:01.022 *****
2026-01-07 01:02:18.390411 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-07 01:02:18.390414 | orchestrator |
2026-01-07 01:02:18.390417 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-01-07 01:02:18.390420 | orchestrator | Wednesday 07 January 2026 01:02:04 +0000 (0:00:28.805) 0:02:29.827 *****
2026-01-07 01:02:18.390423 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-07 01:02:18.390426 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-07 01:02:18.390430 | orchestrator |
2026-01-07 01:02:18.390433 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-07 01:02:18.390436 | orchestrator | Wednesday 07 January 2026 01:02:11 +0000 (0:00:07.046) 0:02:36.874 *****
2026-01-07 01:02:18.390439 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390442 | orchestrator |
2026-01-07 01:02:18.390445 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-01-07 01:02:18.390448 | orchestrator | Wednesday 07 January 2026 01:02:11 +0000 (0:00:00.132) 0:02:37.006 *****
2026-01-07 01:02:18.390451 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390454 | orchestrator |
2026-01-07 01:02:18.390461 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-01-07 01:02:18.390464 | orchestrator | Wednesday 07 January 2026 01:02:11 +0000 (0:00:00.160) 0:02:37.167 *****
2026-01-07 01:02:18.390467 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390470 | orchestrator |
2026-01-07 01:02:18.390473 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-01-07 01:02:18.390476 | orchestrator | Wednesday 07 January 2026 01:02:11 +0000 (0:00:00.133) 0:02:37.301 *****
2026-01-07 01:02:18.390479 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390482 | orchestrator |
2026-01-07 01:02:18.390485 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-01-07 01:02:18.390488 | orchestrator | Wednesday 07 January 2026 01:02:12 +0000 (0:00:00.502) 0:02:37.804 *****
2026-01-07 01:02:18.390492 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:02:18.390495 | orchestrator |
2026-01-07 01:02:18.390498 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-07 01:02:18.390501 | orchestrator | Wednesday 07 January 2026 01:02:16 +0000 (0:00:04.348) 0:02:42.152 *****
2026-01-07 01:02:18.390504 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:02:18.390507 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:02:18.390510 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:02:18.390513 | orchestrator |
2026-01-07 01:02:18.390516 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:02:18.390519 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-07 01:02:18.390523 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:02:18.390528 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-07 01:02:18.390531 | orchestrator |
2026-01-07 01:02:18.390534 | orchestrator |
2026-01-07 01:02:18.390537 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:02:18.390540 | orchestrator | Wednesday 07 January 2026 01:02:16 +0000 (0:00:00.405) 0:02:42.558 *****
2026-01-07 01:02:18.390543 | orchestrator | ===============================================================================
2026-01-07 01:02:18.390546 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.81s
2026-01-07 01:02:18.390549 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.79s
2026-01-07 01:02:18.390552 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.32s
2026-01-07 01:02:18.390555 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.83s
2026-01-07 01:02:18.390559 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 11.57s
2026-01-07 01:02:18.390562 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.15s
2026-01-07 01:02:18.390565 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.34s
2026-01-07 01:02:18.390568 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.50s
2026-01-07 01:02:18.390571 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.05s
2026-01-07 01:02:18.390574 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.62s
2026-01-07 01:02:18.390577 | orchestrator | keystone : Creating default user role ----------------------------------- 4.35s
2026-01-07 01:02:18.390580 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.51s
2026-01-07 01:02:18.390583 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.81s
2026-01-07 01:02:18.390586 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.66s
2026-01-07 01:02:18.390589 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.26s
2026-01-07 01:02:18.390594 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.15s
2026-01-07 01:02:18.390597 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.08s
2026-01-07 01:02:18.390600 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.83s
2026-01-07 01:02:18.390603 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.65s
2026-01-07 01:02:18.390606 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.65s
2026-01-07 01:02:18.390609 | orchestrator | 2026-01-07
01:02:18 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:18.390612 | orchestrator | 2026-01-07 01:02:18 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:18.390616 | orchestrator | 2026-01-07 01:02:18 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:18.390986 | orchestrator | 2026-01-07 01:02:18 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:18.391090 | orchestrator | 2026-01-07 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:21.421391 | orchestrator | 2026-01-07 01:02:21 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:21.425214 | orchestrator | 2026-01-07 01:02:21 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:21.427293 | orchestrator | 2026-01-07 01:02:21 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:21.427610 | orchestrator | 2026-01-07 01:02:21 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:21.428213 | orchestrator | 2026-01-07 01:02:21 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:21.428235 | orchestrator | 2026-01-07 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:24.452709 | orchestrator | 2026-01-07 01:02:24 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:24.453941 | orchestrator | 2026-01-07 01:02:24 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:24.454677 | orchestrator | 2026-01-07 01:02:24 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state STARTED
2026-01-07 01:02:24.456093 | orchestrator | 2026-01-07 01:02:24 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:24.457413 | orchestrator | 2026-01-07 01:02:24 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:24.458645 | orchestrator | 2026-01-07 01:02:24 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:27.493873 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:27.493924 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:27.497162 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task b09093c6-c4a2-4b3f-a7f9-d30c28baa636 is in state SUCCESS
2026-01-07 01:02:27.500250 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:27.502786 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:27.504517 | orchestrator | 2026-01-07 01:02:27 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:27.504571 | orchestrator | 2026-01-07 01:02:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:30.542191 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:30.545146 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:30.549073 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:30.550914 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:30.552445 | orchestrator | 2026-01-07 01:02:30 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:30.552493 | orchestrator | 2026-01-07 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:33.597455 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:33.599357 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:33.602233 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:33.604269 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:33.606282 | orchestrator | 2026-01-07 01:02:33 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:33.606338 | orchestrator | 2026-01-07 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:36.655625 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:36.658196 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:36.661681 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:36.663284 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:36.664768 | orchestrator | 2026-01-07 01:02:36 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:36.664808 | orchestrator | 2026-01-07 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:39.709588 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:39.712463 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:39.714697 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:39.716748 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:39.718577 | orchestrator | 2026-01-07 01:02:39 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:39.718620 | orchestrator | 2026-01-07 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:42.769974 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:42.772610 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:42.774177 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:42.775874 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:42.778629 | orchestrator | 2026-01-07 01:02:42 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:42.778715 | orchestrator | 2026-01-07 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:45.817364 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:45.819756 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:45.822061 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:45.823413 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:45.825357 | orchestrator | 2026-01-07 01:02:45 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:45.825397 | orchestrator | 2026-01-07 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:48.866852 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:48.866922 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:48.867747 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:48.868364 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:48.870127 | orchestrator | 2026-01-07 01:02:48 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:48.870192 | orchestrator | 2026-01-07 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:51.912569 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:51.914695 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:51.915996 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:51.917565 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:51.918642 | orchestrator | 2026-01-07 01:02:51 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:51.918810 | orchestrator | 2026-01-07 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:54.960111 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:54.962202 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:54.962324 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:54.964264 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:54.965835 | orchestrator | 2026-01-07 01:02:54 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:54.965883 | orchestrator | 2026-01-07 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:02:58.007423 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:02:58.008888 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:02:58.011152 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:02:58.012176 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:02:58.015973 | orchestrator | 2026-01-07 01:02:58 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:02:58.016048 | orchestrator | 2026-01-07 01:02:58 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:01.055475 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:01.057142 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:01.057890 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:01.059684 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:01.060559 | orchestrator | 2026-01-07 01:03:01 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:01.060590 | orchestrator | 2026-01-07 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:04.099030 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:04.101354 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:04.102614 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:04.107307 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:04.107380 | orchestrator | 2026-01-07 01:03:04 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:04.107390 | orchestrator | 2026-01-07 01:03:04 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:07.144003 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:07.144927 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:07.146069 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:07.146993 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:07.148107 | orchestrator | 2026-01-07 01:03:07 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:07.148132 | orchestrator | 2026-01-07 01:03:07 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:10.182355 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:10.185143 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:10.186225 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:10.186779 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:10.187537 | orchestrator | 2026-01-07 01:03:10 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:10.187696 | orchestrator | 2026-01-07 01:03:10 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:13.212660 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:13.213417 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:13.214322 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:13.215238 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:13.217388 | orchestrator | 2026-01-07 01:03:13 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:13.217432 | orchestrator | 2026-01-07 01:03:13 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:16.248256 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:16.248582 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:16.249329 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:16.249765 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:16.250233 | orchestrator | 2026-01-07 01:03:16 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:16.250246 | orchestrator | 2026-01-07 01:03:16 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:19.279387 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:19.279461 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:19.279471 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:19.279478 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:19.279485 | orchestrator | 2026-01-07 01:03:19 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:19.279490 | orchestrator | 2026-01-07 01:03:19 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:22.308820 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:22.309218 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:22.309976 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:22.310557 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:22.311757 | orchestrator | 2026-01-07 01:03:22 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:22.311867 | orchestrator | 2026-01-07 01:03:22 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:25.370104 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:25.370640 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:25.371689 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:25.372615 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:25.373695 | orchestrator | 2026-01-07 01:03:25 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:25.373759 | orchestrator | 2026-01-07 01:03:25 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:28.406488 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:28.406577 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:28.406587 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:28.406594 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:28.406601 | orchestrator | 2026-01-07 01:03:28 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:28.406609 | orchestrator | 2026-01-07 01:03:28 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:31.433949 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED
2026-01-07 01:03:31.434093 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:03:31.434544 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED
2026-01-07 01:03:31.435132 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED
2026-01-07 01:03:31.435771 | orchestrator | 2026-01-07 01:03:31 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:03:31.438124 | orchestrator | 2026-01-07 01:03:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:03:34.457247 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task
e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:34.457713 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:34.458502 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:34.459289 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:34.460201 | orchestrator | 2026-01-07 01:03:34 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:34.460252 | orchestrator | 2026-01-07 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:37.489413 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:37.491263 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:37.492928 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:37.494254 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:37.495610 | orchestrator | 2026-01-07 01:03:37 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:37.495652 | orchestrator | 2026-01-07 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:40.531006 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:40.532188 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:40.534596 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:40.536285 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task 
678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:40.537667 | orchestrator | 2026-01-07 01:03:40 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:40.537736 | orchestrator | 2026-01-07 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:43.567556 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:43.569906 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:43.572261 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:43.574651 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:43.577437 | orchestrator | 2026-01-07 01:03:43 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:43.577814 | orchestrator | 2026-01-07 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:46.604863 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:46.606893 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:46.609004 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:46.611013 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:46.612485 | orchestrator | 2026-01-07 01:03:46 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:46.612531 | orchestrator | 2026-01-07 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:49.648330 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task 
e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:49.649167 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:49.650927 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:49.652325 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:49.653004 | orchestrator | 2026-01-07 01:03:49 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:49.653067 | orchestrator | 2026-01-07 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:52.689837 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:52.690371 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:52.691077 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:52.693321 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:52.693917 | orchestrator | 2026-01-07 01:03:52 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:52.694042 | orchestrator | 2026-01-07 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:55.721239 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:55.721304 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:55.721314 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:55.721912 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task 
678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:55.723267 | orchestrator | 2026-01-07 01:03:55 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:55.723293 | orchestrator | 2026-01-07 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:03:58.755834 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:03:58.756561 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:03:58.758583 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:03:58.762064 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:03:58.764306 | orchestrator | 2026-01-07 01:03:58 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:03:58.764382 | orchestrator | 2026-01-07 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:01.793223 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:01.793438 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:01.794327 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:04:01.795089 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:04:01.795749 | orchestrator | 2026-01-07 01:04:01 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:01.795784 | orchestrator | 2026-01-07 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:04.824422 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task 
e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:04.825396 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:04.825960 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:04:04.826761 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state STARTED 2026-01-07 01:04:04.827908 | orchestrator | 2026-01-07 01:04:04 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:04.827941 | orchestrator | 2026-01-07 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:07.865213 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:07.865640 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:07.866195 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state STARTED 2026-01-07 01:04:07.867293 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task 678a9f9d-fecf-4a15-9ded-87858c1a005b is in state SUCCESS 2026-01-07 01:04:07.867459 | orchestrator | 2026-01-07 01:04:07.867472 | orchestrator | 2026-01-07 01:04:07.867477 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-07 01:04:07.867481 | orchestrator | 2026-01-07 01:04:07.867485 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-07 01:04:07.867490 | orchestrator | Wednesday 07 January 2026 01:01:30 +0000 (0:00:00.191) 0:00:00.191 ***** 2026-01-07 01:04:07.867494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-07 01:04:07.867499 | orchestrator | 2026-01-07 
01:04:07.867503 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-07 01:04:07.867507 | orchestrator | Wednesday 07 January 2026 01:01:30 +0000 (0:00:00.202) 0:00:00.393 ***** 2026-01-07 01:04:07.867512 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-07 01:04:07.867516 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-07 01:04:07.867520 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-07 01:04:07.867524 | orchestrator | 2026-01-07 01:04:07.867528 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-07 01:04:07.867533 | orchestrator | Wednesday 07 January 2026 01:01:31 +0000 (0:00:01.060) 0:00:01.454 ***** 2026-01-07 01:04:07.867539 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-07 01:04:07.867549 | orchestrator | 2026-01-07 01:04:07.867555 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-07 01:04:07.867561 | orchestrator | Wednesday 07 January 2026 01:01:33 +0000 (0:00:01.259) 0:00:02.713 ***** 2026-01-07 01:04:07.867567 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.867574 | orchestrator | 2026-01-07 01:04:07.867580 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-07 01:04:07.867585 | orchestrator | Wednesday 07 January 2026 01:01:33 +0000 (0:00:00.768) 0:00:03.482 ***** 2026-01-07 01:04:07.867611 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.867617 | orchestrator | 2026-01-07 01:04:07.867623 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-07 01:04:07.867629 | orchestrator | Wednesday 07 January 2026 01:01:34 +0000 (0:00:00.911) 0:00:04.394 ***** 2026-01-07 
01:04:07.867636 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-01-07 01:04:07.867642 | orchestrator | ok: [testbed-manager] 2026-01-07 01:04:07.867649 | orchestrator | 2026-01-07 01:04:07.867655 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-07 01:04:07.867662 | orchestrator | Wednesday 07 January 2026 01:02:15 +0000 (0:00:41.082) 0:00:45.477 ***** 2026-01-07 01:04:07.867668 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-07 01:04:07.867672 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-07 01:04:07.867676 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-07 01:04:07.867680 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-07 01:04:07.867684 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-07 01:04:07.867687 | orchestrator | 2026-01-07 01:04:07.867691 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-07 01:04:07.867695 | orchestrator | Wednesday 07 January 2026 01:02:20 +0000 (0:00:04.331) 0:00:49.809 ***** 2026-01-07 01:04:07.867699 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-07 01:04:07.867702 | orchestrator | 2026-01-07 01:04:07.867707 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-07 01:04:07.867714 | orchestrator | Wednesday 07 January 2026 01:02:20 +0000 (0:00:00.444) 0:00:50.253 ***** 2026-01-07 01:04:07.867720 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:04:07.867726 | orchestrator | 2026-01-07 01:04:07.867732 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-07 01:04:07.867751 | orchestrator | Wednesday 07 January 2026 01:02:20 +0000 (0:00:00.103) 0:00:50.356 ***** 2026-01-07 01:04:07.867758 | orchestrator | skipping: 
[testbed-manager] 2026-01-07 01:04:07.867765 | orchestrator | 2026-01-07 01:04:07.867772 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-01-07 01:04:07.867792 | orchestrator | Wednesday 07 January 2026 01:02:21 +0000 (0:00:00.382) 0:00:50.738 ***** 2026-01-07 01:04:07.867796 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.867800 | orchestrator | 2026-01-07 01:04:07.867803 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-07 01:04:07.867807 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:01.256) 0:00:51.995 ***** 2026-01-07 01:04:07.867811 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.867815 | orchestrator | 2026-01-07 01:04:07.867818 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-07 01:04:07.867822 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.802) 0:00:52.798 ***** 2026-01-07 01:04:07.867826 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.867830 | orchestrator | 2026-01-07 01:04:07.867834 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-07 01:04:07.867837 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.429) 0:00:53.228 ***** 2026-01-07 01:04:07.867841 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-07 01:04:07.867845 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-07 01:04:07.867849 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-07 01:04:07.867853 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-07 01:04:07.867857 | orchestrator | 2026-01-07 01:04:07.867860 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:04:07.867864 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-07 01:04:07.867869 | orchestrator | 2026-01-07 01:04:07.867878 | orchestrator | 2026-01-07 01:04:07.867889 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:04:07.867893 | orchestrator | Wednesday 07 January 2026 01:02:24 +0000 (0:00:01.185) 0:00:54.414 ***** 2026-01-07 01:04:07.867897 | orchestrator | =============================================================================== 2026-01-07 01:04:07.867901 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.08s 2026-01-07 01:04:07.867904 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s 2026-01-07 01:04:07.867908 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.26s 2026-01-07 01:04:07.867912 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.26s 2026-01-07 01:04:07.867916 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.19s 2026-01-07 01:04:07.867919 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.06s 2026-01-07 01:04:07.867923 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-01-07 01:04:07.867929 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-01-07 01:04:07.867939 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.77s 2026-01-07 01:04:07.867945 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2026-01-07 01:04:07.867951 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.43s 2026-01-07 01:04:07.867957 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.38s 2026-01-07 01:04:07.867963 | 
orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-01-07 01:04:07.867969 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s 2026-01-07 01:04:07.867974 | orchestrator | 2026-01-07 01:04:07.867980 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-07 01:04:07.867986 | orchestrator | 2.16.14 2026-01-07 01:04:07.867992 | orchestrator | 2026-01-07 01:04:07.867998 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-07 01:04:07.868005 | orchestrator | 2026-01-07 01:04:07.868011 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-07 01:04:07.868018 | orchestrator | Wednesday 07 January 2026 01:02:28 +0000 (0:00:00.197) 0:00:00.197 ***** 2026-01-07 01:04:07.868023 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868027 | orchestrator | 2026-01-07 01:04:07.868031 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-07 01:04:07.868035 | orchestrator | Wednesday 07 January 2026 01:02:29 +0000 (0:00:01.630) 0:00:01.828 ***** 2026-01-07 01:04:07.868038 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868042 | orchestrator | 2026-01-07 01:04:07.868046 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-07 01:04:07.868050 | orchestrator | Wednesday 07 January 2026 01:02:30 +0000 (0:00:00.947) 0:00:02.775 ***** 2026-01-07 01:04:07.868053 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868057 | orchestrator | 2026-01-07 01:04:07.868061 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-07 01:04:07.868064 | orchestrator | Wednesday 07 January 2026 01:02:31 +0000 (0:00:00.979) 0:00:03.754 ***** 2026-01-07 01:04:07.868068 | orchestrator 
| changed: [testbed-manager] 2026-01-07 01:04:07.868072 | orchestrator | 2026-01-07 01:04:07.868076 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-07 01:04:07.868079 | orchestrator | Wednesday 07 January 2026 01:02:32 +0000 (0:00:01.161) 0:00:04.916 ***** 2026-01-07 01:04:07.868083 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868087 | orchestrator | 2026-01-07 01:04:07.868091 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-07 01:04:07.868095 | orchestrator | Wednesday 07 January 2026 01:02:33 +0000 (0:00:01.043) 0:00:05.959 ***** 2026-01-07 01:04:07.868100 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868110 | orchestrator | 2026-01-07 01:04:07.868117 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-07 01:04:07.868122 | orchestrator | Wednesday 07 January 2026 01:02:34 +0000 (0:00:01.057) 0:00:07.017 ***** 2026-01-07 01:04:07.868127 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868131 | orchestrator | 2026-01-07 01:04:07.868136 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-07 01:04:07.868141 | orchestrator | Wednesday 07 January 2026 01:02:37 +0000 (0:00:02.092) 0:00:09.110 ***** 2026-01-07 01:04:07.868145 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868149 | orchestrator | 2026-01-07 01:04:07.868154 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-07 01:04:07.868158 | orchestrator | Wednesday 07 January 2026 01:02:38 +0000 (0:00:01.175) 0:00:10.285 ***** 2026-01-07 01:04:07.868163 | orchestrator | changed: [testbed-manager] 2026-01-07 01:04:07.868167 | orchestrator | 2026-01-07 01:04:07.868172 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-07 
01:04:07.868177 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:01:03.003) 0:01:13.288 ***** 2026-01-07 01:04:07.868181 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:04:07.868186 | orchestrator | 2026-01-07 01:04:07.868191 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:04:07.868196 | orchestrator | 2026-01-07 01:04:07.868200 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:04:07.868205 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:00.152) 0:01:13.441 ***** 2026-01-07 01:04:07.868209 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:07.868213 | orchestrator | 2026-01-07 01:04:07.868218 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:04:07.868222 | orchestrator | 2026-01-07 01:04:07.868227 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:04:07.868231 | orchestrator | Wednesday 07 January 2026 01:03:52 +0000 (0:00:11.477) 0:01:24.918 ***** 2026-01-07 01:04:07.868236 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:04:07.868241 | orchestrator | 2026-01-07 01:04:07.868249 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-07 01:04:07.868254 | orchestrator | 2026-01-07 01:04:07.868259 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-07 01:04:07.868264 | orchestrator | Wednesday 07 January 2026 01:03:54 +0000 (0:00:01.222) 0:01:26.140 ***** 2026-01-07 01:04:07.868269 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:04:07.868274 | orchestrator | 2026-01-07 01:04:07.868278 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:04:07.868282 | orchestrator | testbed-manager : ok=9  changed=9 
 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-07 01:04:07.868286 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:07.868290 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:07.868294 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:04:07.868298 | orchestrator | 2026-01-07 01:04:07.868301 | orchestrator | 2026-01-07 01:04:07.868305 | orchestrator | 2026-01-07 01:04:07.868309 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:04:07.868313 | orchestrator | Wednesday 07 January 2026 01:04:05 +0000 (0:00:11.027) 0:01:37.168 ***** 2026-01-07 01:04:07.868317 | orchestrator | =============================================================================== 2026-01-07 01:04:07.868320 | orchestrator | Create admin user ------------------------------------------------------ 63.00s 2026-01-07 01:04:07.868324 | orchestrator | Restart ceph manager service ------------------------------------------- 23.73s 2026-01-07 01:04:07.868332 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2026-01-07 01:04:07.868336 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.63s 2026-01-07 01:04:07.868340 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s 2026-01-07 01:04:07.868343 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.16s 2026-01-07 01:04:07.868347 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.06s 2026-01-07 01:04:07.868351 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.04s 2026-01-07 01:04:07.868355 | orchestrator | Set 
mgr/dashboard/server_port to 7000 ----------------------------------- 0.98s 2026-01-07 01:04:07.868358 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.95s 2026-01-07 01:04:07.868363 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-01-07 01:04:07.869220 | orchestrator | 2026-01-07 01:04:07 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:07.869258 | orchestrator | 2026-01-07 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:23.015494 | orchestrator | 2026-01-07 01:04:23 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:23.016059 | orchestrator | 2026-01-07 01:04:23 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:23.017930 | orchestrator | 2026-01-07 01:04:23 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:23.019054 | orchestrator | 2026-01-07 01:04:23.019130 | orchestrator | 2026-01-07 01:04:23 | INFO  | Task 831a0331-bd5f-434b-b288-e478ca1ef044 is in state SUCCESS 2026-01-07 01:04:23.020162 | orchestrator | 2026-01-07 01:04:23.020198 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:04:23.020206 | orchestrator |
2026-01-07 01:04:23.020213 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:04:23.020216 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:00.251) 0:00:00.251 ***** 2026-01-07 01:04:23.020220 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:04:23.020224 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:04:23.020227 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:04:23.020230 | orchestrator | 2026-01-07 01:04:23.020233 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:04:23.020237 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:00.480) 0:00:00.731 ***** 2026-01-07 01:04:23.020242 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-07 01:04:23.020248 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-07 01:04:23.020253 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-07 01:04:23.020258 | orchestrator | 2026-01-07 01:04:23.020266 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-07 01:04:23.020272 | orchestrator | 2026-01-07 01:04:23.020277 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:04:23.020283 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.508) 0:00:01.240 ***** 2026-01-07 01:04:23.020288 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:04:23.020293 | orchestrator | 2026-01-07 01:04:23.020299 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-07 01:04:23.020304 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.516) 0:00:01.757 ***** 2026-01-07 01:04:23.020308 | orchestrator | changed: [testbed-node-0] => (item=barbican 
(key-manager)) 2026-01-07 01:04:23.020311 | orchestrator | 2026-01-07 01:04:23.020315 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-01-07 01:04:23.020318 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:03.681) 0:00:05.439 ***** 2026-01-07 01:04:23.020321 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-07 01:04:23.020324 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-07 01:04:23.020327 | orchestrator | 2026-01-07 01:04:23.020331 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-07 01:04:23.020345 | orchestrator | Wednesday 07 January 2026 01:02:34 +0000 (0:00:06.989) 0:00:12.428 ***** 2026-01-07 01:04:23.020351 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-07 01:04:23.020356 | orchestrator | 2026-01-07 01:04:23.020361 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-07 01:04:23.020367 | orchestrator | Wednesday 07 January 2026 01:02:37 +0000 (0:00:03.039) 0:00:15.467 ***** 2026-01-07 01:04:23.020372 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:04:23.020377 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-01-07 01:04:23.020382 | orchestrator | 2026-01-07 01:04:23.020387 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-07 01:04:23.020406 | orchestrator | Wednesday 07 January 2026 01:02:41 +0000 (0:00:03.745) 0:00:19.213 ***** 2026-01-07 01:04:23.020411 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:04:23.020415 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-07 01:04:23.020418 | orchestrator | changed: [testbed-node-0] => 
(item=creator) 2026-01-07 01:04:23.020423 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-07 01:04:23.020428 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-07 01:04:23.020434 | orchestrator | 2026-01-07 01:04:23.020438 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-07 01:04:23.020444 | orchestrator | Wednesday 07 January 2026 01:02:57 +0000 (0:00:16.375) 0:00:35.589 ***** 2026-01-07 01:04:23.020595 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-07 01:04:23.020604 | orchestrator | 2026-01-07 01:04:23.020609 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-07 01:04:23.020614 | orchestrator | Wednesday 07 January 2026 01:03:01 +0000 (0:00:04.427) 0:00:40.016 ***** 2026-01-07 01:04:23.020622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020692 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020707 | orchestrator | 2026-01-07 01:04:23.020711 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-07 
01:04:23.020719 | orchestrator | Wednesday 07 January 2026 01:03:04 +0000 (0:00:02.532) 0:00:42.549 ***** 2026-01-07 01:04:23.020724 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-07 01:04:23.020729 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-07 01:04:23.020737 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-07 01:04:23.020742 | orchestrator | 2026-01-07 01:04:23.020792 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-07 01:04:23.020798 | orchestrator | Wednesday 07 January 2026 01:03:05 +0000 (0:00:00.944) 0:00:43.493 ***** 2026-01-07 01:04:23.020803 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.020808 | orchestrator | 2026-01-07 01:04:23.020814 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-07 01:04:23.020819 | orchestrator | Wednesday 07 January 2026 01:03:05 +0000 (0:00:00.099) 0:00:43.592 ***** 2026-01-07 01:04:23.020824 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.020829 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.020834 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:04:23.020840 | orchestrator | 2026-01-07 01:04:23.020845 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:04:23.020850 | orchestrator | Wednesday 07 January 2026 01:03:05 +0000 (0:00:00.373) 0:00:43.965 ***** 2026-01-07 01:04:23.020856 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:04:23.020861 | orchestrator | 2026-01-07 01:04:23.020866 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-07 01:04:23.020872 | orchestrator | Wednesday 07 January 2026 01:03:06 +0000 (0:00:00.454) 0:00:44.420 ***** 2026-01-07 
01:04:23.020877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020894 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.020906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.020941 | orchestrator | 2026-01-07 01:04:23.020944 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-07 01:04:23.020947 | orchestrator | Wednesday 07 January 2026 01:03:09 +0000 (0:00:03.135) 0:00:47.555 ***** 2026-01-07 01:04:23.020952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.020956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.020959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.020963 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.020969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.020976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.020979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 
01:04:23.020984 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.020988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.020991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.020995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.020998 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:04:23.021001 | orchestrator | 2026-01-07 01:04:23.021006 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-07 01:04:23.021009 | orchestrator | Wednesday 07 January 2026 01:03:10 +0000 (0:00:01.126) 0:00:48.682 ***** 2026-01-07 01:04:23.021016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021028 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.021031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021045 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.021048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021060 | orchestrator | skipping: [testbed-node-2] 2026-01-07 
01:04:23.021064 | orchestrator | 2026-01-07 01:04:23.021067 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-07 01:04:23.021070 | orchestrator | Wednesday 07 January 2026 01:03:12 +0000 (0:00:01.935) 0:00:50.618 ***** 2026-01-07 01:04:23.021073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021095 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021110 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021116 | orchestrator | 2026-01-07 01:04:23.021120 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-07 01:04:23.021123 | orchestrator | Wednesday 07 January 2026 01:03:15 +0000 (0:00:03.397) 0:00:54.016 ***** 2026-01-07 01:04:23.021126 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021129 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:04:23.021133 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:04:23.021136 | orchestrator | 2026-01-07 01:04:23.021139 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-07 01:04:23.021144 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:02.426) 0:00:56.442 
***** 2026-01-07 01:04:23.021148 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:04:23.021151 | orchestrator | 2026-01-07 01:04:23.021154 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-07 01:04:23.021157 | orchestrator | Wednesday 07 January 2026 01:03:19 +0000 (0:00:00.823) 0:00:57.266 ***** 2026-01-07 01:04:23.021160 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.021164 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.021167 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:04:23.021170 | orchestrator | 2026-01-07 01:04:23.021173 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-07 01:04:23.021176 | orchestrator | Wednesday 07 January 2026 01:03:20 +0000 (0:00:01.145) 0:00:58.411 ***** 2026-01-07 01:04:23.021179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021208 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021221 | orchestrator | 2026-01-07 01:04:23.021224 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-07 
01:04:23.021227 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:09.326) 0:01:07.738 ***** 2026-01-07 01:04:23.021232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021247 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.021253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021265 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.021271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-07 01:04:23.021275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:04:23.021284 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:04:23.021288 | orchestrator | 2026-01-07 01:04:23.021292 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-07 01:04:23.021296 | orchestrator | Wednesday 07 January 2026 01:03:30 +0000 (0:00:01.029) 0:01:08.768 ***** 2026-01-07 01:04:23.021303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-07 01:04:23.021320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021330 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021344 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:04:23.021350 | orchestrator | 2026-01-07 01:04:23.021354 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-07 01:04:23.021358 | orchestrator | Wednesday 07 January 2026 01:03:34 +0000 (0:00:03.763) 0:01:12.531 ***** 2026-01-07 01:04:23.021362 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:04:23.021366 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:04:23.021370 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:04:23.021374 | orchestrator | 2026-01-07 01:04:23.021378 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-07 01:04:23.021382 | orchestrator | Wednesday 07 January 2026 01:03:34 +0000 (0:00:00.492) 0:01:13.023 ***** 2026-01-07 01:04:23.021386 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021390 | orchestrator | 2026-01-07 01:04:23.021394 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-07 01:04:23.021397 | orchestrator | Wednesday 07 January 2026 01:03:37 +0000 (0:00:02.283) 0:01:15.307 ***** 2026-01-07 01:04:23.021401 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021404 | orchestrator | 2026-01-07 01:04:23.021408 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-07 01:04:23.021413 | orchestrator | Wednesday 07 January 2026 
01:03:39 +0000 (0:00:02.545) 0:01:17.853 ***** 2026-01-07 01:04:23.021417 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021421 | orchestrator | 2026-01-07 01:04:23.021425 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:04:23.021429 | orchestrator | Wednesday 07 January 2026 01:03:51 +0000 (0:00:11.422) 0:01:29.275 ***** 2026-01-07 01:04:23.021432 | orchestrator | 2026-01-07 01:04:23.021436 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:04:23.021440 | orchestrator | Wednesday 07 January 2026 01:03:51 +0000 (0:00:00.120) 0:01:29.396 ***** 2026-01-07 01:04:23.021444 | orchestrator | 2026-01-07 01:04:23.021447 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-07 01:04:23.021451 | orchestrator | Wednesday 07 January 2026 01:03:51 +0000 (0:00:00.122) 0:01:29.519 ***** 2026-01-07 01:04:23.021455 | orchestrator | 2026-01-07 01:04:23.021459 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-07 01:04:23.021463 | orchestrator | Wednesday 07 January 2026 01:03:51 +0000 (0:00:00.127) 0:01:29.646 ***** 2026-01-07 01:04:23.021466 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021472 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:04:23.021478 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:04:23.021486 | orchestrator | 2026-01-07 01:04:23.021492 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-07 01:04:23.021498 | orchestrator | Wednesday 07 January 2026 01:04:03 +0000 (0:00:11.968) 0:01:41.615 ***** 2026-01-07 01:04:23.021503 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021508 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:04:23.021516 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:04:23.021521 | 
orchestrator | 2026-01-07 01:04:23.021526 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-07 01:04:23.021532 | orchestrator | Wednesday 07 January 2026 01:04:13 +0000 (0:00:10.104) 0:01:51.719 ***** 2026-01-07 01:04:23.021538 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:04:23.021544 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:04:23.021551 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:04:23.021558 | orchestrator | 2026-01-07 01:04:23.021563 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:04:23.021569 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:04:23.021578 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:04:23.021583 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:04:23.021587 | orchestrator | 2026-01-07 01:04:23.021593 | orchestrator | 2026-01-07 01:04:23.021598 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:04:23.021603 | orchestrator | Wednesday 07 January 2026 01:04:19 +0000 (0:00:06.051) 0:01:57.770 ***** 2026-01-07 01:04:23.021608 | orchestrator | =============================================================================== 2026-01-07 01:04:23.021613 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.38s 2026-01-07 01:04:23.021618 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.97s 2026-01-07 01:04:23.021624 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.42s 2026-01-07 01:04:23.021629 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.10s 2026-01-07 
01:04:23.021634 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.33s 2026-01-07 01:04:23.021639 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.99s 2026-01-07 01:04:23.021645 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.05s 2026-01-07 01:04:23.021650 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.43s 2026-01-07 01:04:23.021659 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.76s 2026-01-07 01:04:23.021663 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.75s 2026-01-07 01:04:23.021666 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.68s 2026-01-07 01:04:23.021669 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.40s 2026-01-07 01:04:23.021672 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.14s 2026-01-07 01:04:23.021675 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.04s 2026-01-07 01:04:23.021678 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.55s 2026-01-07 01:04:23.021682 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.53s 2026-01-07 01:04:23.021685 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.43s 2026-01-07 01:04:23.021688 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s 2026-01-07 01:04:23.021691 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.94s 2026-01-07 01:04:23.021694 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.15s 2026-01-07 01:04:23.021697 
| orchestrator | 2026-01-07 01:04:23 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:23.021701 | orchestrator | 2026-01-07 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:26.047909 | orchestrator | 2026-01-07 01:04:26 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:26.047962 | orchestrator | 2026-01-07 01:04:26 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:26.048050 | orchestrator | 2026-01-07 01:04:26 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:26.048696 | orchestrator | 2026-01-07 01:04:26 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:26.048731 | orchestrator | 2026-01-07 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:29.071565 | orchestrator | 2026-01-07 01:04:29 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:29.072279 | orchestrator | 2026-01-07 01:04:29 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:29.073238 | orchestrator | 2026-01-07 01:04:29 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:29.074066 | orchestrator | 2026-01-07 01:04:29 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:29.074092 | orchestrator | 2026-01-07 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:32.127142 | orchestrator | 2026-01-07 01:04:32 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:32.128902 | orchestrator | 2026-01-07 01:04:32 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:32.130759 | orchestrator | 2026-01-07 01:04:32 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:32.132201 | orchestrator | 2026-01-07 
01:04:32 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:32.132250 | orchestrator | 2026-01-07 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:35.171894 | orchestrator | 2026-01-07 01:04:35 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:35.172002 | orchestrator | 2026-01-07 01:04:35 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:35.172872 | orchestrator | 2026-01-07 01:04:35 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:35.173513 | orchestrator | 2026-01-07 01:04:35 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:35.173550 | orchestrator | 2026-01-07 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:38.213553 | orchestrator | 2026-01-07 01:04:38 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:38.215154 | orchestrator | 2026-01-07 01:04:38 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:38.216938 | orchestrator | 2026-01-07 01:04:38 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:38.218142 | orchestrator | 2026-01-07 01:04:38 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:38.218180 | orchestrator | 2026-01-07 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:41.260669 | orchestrator | 2026-01-07 01:04:41 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:41.262446 | orchestrator | 2026-01-07 01:04:41 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:41.264721 | orchestrator | 2026-01-07 01:04:41 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:41.267164 | orchestrator | 2026-01-07 01:04:41 | INFO  | Task 
2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:41.267210 | orchestrator | 2026-01-07 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:44.317988 | orchestrator | 2026-01-07 01:04:44 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:44.320474 | orchestrator | 2026-01-07 01:04:44 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:44.322652 | orchestrator | 2026-01-07 01:04:44 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:44.326449 | orchestrator | 2026-01-07 01:04:44 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:44.326566 | orchestrator | 2026-01-07 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:47.367499 | orchestrator | 2026-01-07 01:04:47 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:47.369896 | orchestrator | 2026-01-07 01:04:47 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:47.372108 | orchestrator | 2026-01-07 01:04:47 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:47.373398 | orchestrator | 2026-01-07 01:04:47 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:47.373441 | orchestrator | 2026-01-07 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:50.410836 | orchestrator | 2026-01-07 01:04:50 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:50.411046 | orchestrator | 2026-01-07 01:04:50 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:50.412389 | orchestrator | 2026-01-07 01:04:50 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:50.413344 | orchestrator | 2026-01-07 01:04:50 | INFO  | Task 
2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:50.413387 | orchestrator | 2026-01-07 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:53.476805 | orchestrator | 2026-01-07 01:04:53 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:53.477924 | orchestrator | 2026-01-07 01:04:53 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:53.480388 | orchestrator | 2026-01-07 01:04:53 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:53.481928 | orchestrator | 2026-01-07 01:04:53 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:53.481970 | orchestrator | 2026-01-07 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:56.524737 | orchestrator | 2026-01-07 01:04:56 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:56.526274 | orchestrator | 2026-01-07 01:04:56 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:56.527958 | orchestrator | 2026-01-07 01:04:56 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:56.528814 | orchestrator | 2026-01-07 01:04:56 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:56.528859 | orchestrator | 2026-01-07 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:04:59.579106 | orchestrator | 2026-01-07 01:04:59 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:04:59.582760 | orchestrator | 2026-01-07 01:04:59 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:04:59.582817 | orchestrator | 2026-01-07 01:04:59 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:04:59.582825 | orchestrator | 2026-01-07 01:04:59 | INFO  | Task 
2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:04:59.582936 | orchestrator | 2026-01-07 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:02.627290 | orchestrator | 2026-01-07 01:05:02 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:05:02.630863 | orchestrator | 2026-01-07 01:05:02 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:05:02.634206 | orchestrator | 2026-01-07 01:05:02 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:05:02.636427 | orchestrator | 2026-01-07 01:05:02 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:05:02.636483 | orchestrator | 2026-01-07 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:05.686172 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:05:05.690204 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:05:05.693103 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:05:05.694723 | orchestrator | 2026-01-07 01:05:05 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:05:05.695118 | orchestrator | 2026-01-07 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:08.732181 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:05:08.733863 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:05:08.737854 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:05:08.740350 | orchestrator | 2026-01-07 01:05:08 | INFO  | Task 
2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:05:08.740737 | orchestrator | 2026-01-07 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:11.784219 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state STARTED 2026-01-07 01:05:11.786415 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:05:11.789750 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:05:11.793870 | orchestrator | 2026-01-07 01:05:11 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:05:11.793926 | orchestrator | 2026-01-07 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:05:14.831739 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task e4d85ca5-7b43-4488-a1fc-149f32ce7dbe is in state SUCCESS 2026-01-07 01:05:14.833034 | orchestrator | 2026-01-07 01:05:14.833071 | orchestrator | 2026-01-07 01:05:14.833077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:05:14.833084 | orchestrator | 2026-01-07 01:05:14.833089 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:05:14.833098 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:00.550) 0:00:00.550 ***** 2026-01-07 01:05:14.833106 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:05:14.833115 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:05:14.833123 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:05:14.833130 | orchestrator | 2026-01-07 01:05:14.833138 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:05:14.833146 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.523) 0:00:01.074 ***** 2026-01-07 01:05:14.833154 | orchestrator | ok: 
[testbed-node-0] => (item=enable_designate_True) 2026-01-07 01:05:14.833162 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-07 01:05:14.833170 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-07 01:05:14.833178 | orchestrator | 2026-01-07 01:05:14.833187 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-07 01:05:14.833195 | orchestrator | 2026-01-07 01:05:14.833203 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:05:14.833211 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.460) 0:00:01.535 ***** 2026-01-07 01:05:14.833237 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:05:14.833247 | orchestrator | 2026-01-07 01:05:14.833255 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-07 01:05:14.833263 | orchestrator | Wednesday 07 January 2026 01:02:24 +0000 (0:00:00.537) 0:00:02.072 ***** 2026-01-07 01:05:14.833271 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-07 01:05:14.833280 | orchestrator | 2026-01-07 01:05:14.833288 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-07 01:05:14.833472 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:03.422) 0:00:05.495 ***** 2026-01-07 01:05:14.833478 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-07 01:05:14.833484 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-07 01:05:14.833488 | orchestrator | 2026-01-07 01:05:14.833494 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-07 01:05:14.833511 | 
orchestrator | Wednesday 07 January 2026 01:02:34 +0000 (0:00:06.910) 0:00:12.405 ***** 2026-01-07 01:05:14.833519 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:05:14.833527 | orchestrator | 2026-01-07 01:05:14.833534 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-07 01:05:14.833542 | orchestrator | Wednesday 07 January 2026 01:02:37 +0000 (0:00:03.141) 0:00:15.547 ***** 2026-01-07 01:05:14.833550 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:05:14.833557 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-07 01:05:14.833565 | orchestrator | 2026-01-07 01:05:14.833574 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-07 01:05:14.833582 | orchestrator | Wednesday 07 January 2026 01:02:41 +0000 (0:00:03.736) 0:00:19.284 ***** 2026-01-07 01:05:14.833590 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:05:14.833598 | orchestrator | 2026-01-07 01:05:14.833640 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-07 01:05:14.833649 | orchestrator | Wednesday 07 January 2026 01:02:44 +0000 (0:00:03.216) 0:00:22.501 ***** 2026-01-07 01:05:14.833657 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-07 01:05:14.833666 | orchestrator | 2026-01-07 01:05:14.833707 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-07 01:05:14.833714 | orchestrator | Wednesday 07 January 2026 01:02:48 +0000 (0:00:03.782) 0:00:26.283 ***** 2026-01-07 01:05:14.833721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.833742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.833756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.833766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.833941 | orchestrator | 2026-01-07 01:05:14.833999 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-07 01:05:14.834485 | orchestrator | Wednesday 07 January 2026 01:02:51 +0000 (0:00:02.673) 0:00:28.957 ***** 2026-01-07 01:05:14.834506 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:14.834515 | orchestrator | 2026-01-07 01:05:14.834523 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-07 01:05:14.834532 | orchestrator | Wednesday 07 January 2026 01:02:51 +0000 (0:00:00.130) 0:00:29.087 ***** 2026-01-07 01:05:14.834540 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:14.834548 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:14.834556 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:14.834564 | orchestrator | 2026-01-07 01:05:14.834572 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:05:14.834580 | orchestrator | Wednesday 07 January 2026 01:02:51 +0000 (0:00:00.282) 0:00:29.370 ***** 2026-01-07 01:05:14.834589 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:05:14.834604 | orchestrator | 2026-01-07 01:05:14.834612 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-07 01:05:14.834616 | orchestrator | Wednesday 07 January 2026 01:02:52 +0000 (0:00:00.722) 0:00:30.092 ***** 2026-01-07 01:05:14.834645 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.834652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.834662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.834667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2026-01-07 01:05:14.834780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834824 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.834852 | orchestrator | 2026-01-07 01:05:14.834857 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-07 01:05:14.834862 | orchestrator | Wednesday 07 January 2026 01:02:59 +0000 (0:00:06.731) 0:00:36.824 ***** 2026-01-07 01:05:14.834867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.834888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.834930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.834947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.834955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.834968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.834973 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:14.834978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.834998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.835004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835029 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:14.835033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.835051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.835057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835082 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:14.835087 | orchestrator | 2026-01-07 01:05:14.835091 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-07 01:05:14.835097 | orchestrator | Wednesday 07 January 2026 01:03:00 +0000 (0:00:01.465) 0:00:38.290 ***** 
2026-01-07 01:05:14.835105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.835130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.835140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835181 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:14.835189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.835211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.835216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835255 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:14.835277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-07 01:05:14.835300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-07 01:05:14.835307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.835349 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:14.835357 | orchestrator | 2026-01-07 01:05:14.835365 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-07 01:05:14.835373 | orchestrator | Wednesday 07 January 2026 01:03:02 +0000 (0:00:01.634) 0:00:39.925 ***** 2026-01-07 01:05:14.835381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835430 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-07 01:05:14.835458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835480 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835544 | orchestrator | 2026-01-07 01:05:14.835549 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-07 01:05:14.835553 | orchestrator | Wednesday 07 January 2026 01:03:09 +0000 (0:00:07.194) 0:00:47.119 
***** 2026-01-07 01:05:14.835558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.835595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.835664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-01-07 01:05:14.835688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835714 | orchestrator |
2026-01-07 01:05:14.835720 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-01-07 01:05:14.835725 | orchestrator | Wednesday 07 January 2026 01:03:29 +0000 (0:00:20.156) 0:01:07.275 *****
2026-01-07 01:05:14.835731 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-07 01:05:14.835736 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-07 01:05:14.835742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-07 01:05:14.835747 | orchestrator |
2026-01-07 01:05:14.835753 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-01-07 01:05:14.835758 | orchestrator | Wednesday 07 January 2026 01:03:34 +0000 (0:00:04.896) 0:01:12.172 *****
2026-01-07 01:05:14.835763 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-07 01:05:14.835769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-07 01:05:14.835774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-07 01:05:14.835779 | orchestrator |
2026-01-07 01:05:14.835784 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-01-07 01:05:14.835789 | orchestrator | Wednesday 07 January 2026 01:03:37 +0000 (0:00:03.084) 0:01:15.256 *****
2026-01-07 01:05:14.835803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.835809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.835817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.835823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.835829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.835860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.835889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.835985 | orchestrator |
2026-01-07 01:05:14.835990 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-07 01:05:14.835995 | orchestrator | Wednesday 07 January 2026 01:03:40 +0000 (0:00:03.145) 0:01:18.402 *****
2026-01-07 01:05:14.836004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836142 | orchestrator |
2026-01-07 01:05:14.836151 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-07 01:05:14.836159 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:02.686) 0:01:21.088 *****
2026-01-07 01:05:14.836166 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:14.836172 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:05:14.836180 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:05:14.836187 | orchestrator |
2026-01-07 01:05:14.836195 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-07 01:05:14.836202 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:00.554) 0:01:21.642 *****
2026-01-07 01:05:14.836216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836258 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:05:14.836266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-07 01:05:14.836295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-07 01:05:14.836302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-07 01:05:14.836330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout':
'30'}}})  2026-01-07 01:05:14.836355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.836364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:05:14.836372 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:14.836381 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:14.836389 | orchestrator | 2026-01-07 01:05:14.836398 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-07 01:05:14.836407 | orchestrator | Wednesday 07 January 2026 01:03:45 +0000 (0:00:01.598) 0:01:23.241 ***** 2026-01-07 01:05:14.836417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.836423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.836430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-07 01:05:14.836441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:05:14.836531 | orchestrator | 2026-01-07 01:05:14.836535 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-07 01:05:14.836540 | orchestrator | Wednesday 07 January 2026 01:03:50 +0000 (0:00:04.715) 0:01:27.956 ***** 2026-01-07 01:05:14.836544 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:14.836549 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:14.836554 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:14.836558 | orchestrator | 2026-01-07 01:05:14.836563 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-07 01:05:14.836567 | orchestrator | Wednesday 07 January 2026 01:03:50 +0000 (0:00:00.225) 0:01:28.182 ***** 2026-01-07 01:05:14.836572 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-07 01:05:14.836576 | orchestrator | 2026-01-07 01:05:14.836581 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-07 01:05:14.836585 | orchestrator | Wednesday 07 January 2026 01:03:52 +0000 (0:00:01.939) 0:01:30.121 ***** 2026-01-07 01:05:14.836590 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-07 01:05:14.836595 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-07 01:05:14.836599 | orchestrator | 2026-01-07 01:05:14.836605 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-07 01:05:14.836613 | orchestrator | Wednesday 07 January 2026 01:03:54 +0000 (0:00:02.075) 0:01:32.197 
***** 2026-01-07 01:05:14.836619 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836624 | orchestrator | 2026-01-07 01:05:14.836629 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:05:14.836635 | orchestrator | Wednesday 07 January 2026 01:04:09 +0000 (0:00:15.322) 0:01:47.519 ***** 2026-01-07 01:05:14.836640 | orchestrator | 2026-01-07 01:05:14.836645 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:05:14.836655 | orchestrator | Wednesday 07 January 2026 01:04:10 +0000 (0:00:00.221) 0:01:47.741 ***** 2026-01-07 01:05:14.836660 | orchestrator | 2026-01-07 01:05:14.836665 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-07 01:05:14.836671 | orchestrator | Wednesday 07 January 2026 01:04:10 +0000 (0:00:00.120) 0:01:47.862 ***** 2026-01-07 01:05:14.836711 | orchestrator | 2026-01-07 01:05:14.836717 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-07 01:05:14.836722 | orchestrator | Wednesday 07 January 2026 01:04:10 +0000 (0:00:00.079) 0:01:47.941 ***** 2026-01-07 01:05:14.836727 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836733 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836738 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836743 | orchestrator | 2026-01-07 01:05:14.836749 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-07 01:05:14.836754 | orchestrator | Wednesday 07 January 2026 01:04:18 +0000 (0:00:08.522) 0:01:56.464 ***** 2026-01-07 01:05:14.836760 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836765 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836771 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836776 | orchestrator | 2026-01-07 01:05:14.836781 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-07 01:05:14.836786 | orchestrator | Wednesday 07 January 2026 01:04:29 +0000 (0:00:10.652) 0:02:07.116 ***** 2026-01-07 01:05:14.836791 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836796 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836802 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836807 | orchestrator | 2026-01-07 01:05:14.836813 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-07 01:05:14.836818 | orchestrator | Wednesday 07 January 2026 01:04:34 +0000 (0:00:05.346) 0:02:12.463 ***** 2026-01-07 01:05:14.836823 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836828 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836834 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836839 | orchestrator | 2026-01-07 01:05:14.836845 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-07 01:05:14.836853 | orchestrator | Wednesday 07 January 2026 01:04:44 +0000 (0:00:10.038) 0:02:22.501 ***** 2026-01-07 01:05:14.836859 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836864 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836869 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836874 | orchestrator | 2026-01-07 01:05:14.836880 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-07 01:05:14.836885 | orchestrator | Wednesday 07 January 2026 01:04:55 +0000 (0:00:10.383) 0:02:32.884 ***** 2026-01-07 01:05:14.836890 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836895 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:14.836901 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:14.836906 | orchestrator | 2026-01-07 01:05:14.836911 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-07 01:05:14.836916 | orchestrator | Wednesday 07 January 2026 01:05:05 +0000 (0:00:10.806) 0:02:43.690 ***** 2026-01-07 01:05:14.836922 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:14.836927 | orchestrator | 2026-01-07 01:05:14.836932 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:05:14.836938 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:05:14.836944 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:05:14.836950 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:05:14.836959 | orchestrator | 2026-01-07 01:05:14.836965 | orchestrator | 2026-01-07 01:05:14.836970 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:05:14.836975 | orchestrator | Wednesday 07 January 2026 01:05:12 +0000 (0:00:07.013) 0:02:50.704 ***** 2026-01-07 01:05:14.836980 | orchestrator | =============================================================================== 2026-01-07 01:05:14.836985 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.16s 2026-01-07 01:05:14.836989 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.32s 2026-01-07 01:05:14.836994 | orchestrator | designate : Restart designate-worker container ------------------------- 10.81s 2026-01-07 01:05:14.836998 | orchestrator | designate : Restart designate-api container ---------------------------- 10.65s 2026-01-07 01:05:14.837003 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.38s 2026-01-07 01:05:14.837007 | orchestrator | designate : Restart designate-producer 
container ----------------------- 10.04s 2026-01-07 01:05:14.837012 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.52s 2026-01-07 01:05:14.837016 | orchestrator | designate : Copying over config.json files for services ----------------- 7.19s 2026-01-07 01:05:14.837021 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.01s 2026-01-07 01:05:14.837026 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.91s 2026-01-07 01:05:14.837033 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.73s 2026-01-07 01:05:14.837038 | orchestrator | designate : Restart designate-central container ------------------------- 5.35s 2026-01-07 01:05:14.837042 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.90s 2026-01-07 01:05:14.837047 | orchestrator | designate : Check designate containers ---------------------------------- 4.71s 2026-01-07 01:05:14.837051 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.78s 2026-01-07 01:05:14.837059 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.74s 2026-01-07 01:05:14.837066 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.42s 2026-01-07 01:05:14.837074 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.22s 2026-01-07 01:05:14.837082 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.15s 2026-01-07 01:05:14.837090 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.14s 2026-01-07 01:05:14.837098 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:05:14.837106 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 
b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED 2026-01-07 01:05:14.837114 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state STARTED 2026-01-07 01:05:14.838710 | orchestrator | 2026-01-07 01:05:14 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED 2026-01-07 01:05:14.838737 | orchestrator | 2026-01-07 01:05:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:33.133302 | orchestrator | 2026-01-07 01:05:33.133349 | orchestrator | 2026-01-07 01:05:33.133357 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:05:33.133365 | orchestrator | 2026-01-07 01:05:33.133372 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:05:33.133379 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:00.392) 0:00:00.392 ***** 2026-01-07 01:05:33.133386 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:05:33.133393 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:05:33.133400 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:05:33.133407 | orchestrator | 2026-01-07 01:05:33.133412 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:05:33.133416 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:00.384) 0:00:00.777 ***** 2026-01-07 01:05:33.133420 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-07 01:05:33.133424 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-07 01:05:33.133441 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-07 01:05:33.133445 | orchestrator | 2026-01-07 01:05:33.133468 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-07 01:05:33.133472 | orchestrator | 2026-01-07 01:05:33.133476 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:05:33.133480 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:00.296) 0:00:01.074 ***** 2026-01-07 01:05:33.133484 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:05:33.133489 | orchestrator | 2026-01-07
01:05:33.133492 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-07 01:05:33.133496 | orchestrator | Wednesday 07 January 2026 01:04:27 +0000 (0:00:00.789) 0:00:01.864 ***** 2026-01-07 01:05:33.133500 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-07 01:05:33.133504 | orchestrator | 2026-01-07 01:05:33.133507 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-07 01:05:33.133511 | orchestrator | Wednesday 07 January 2026 01:04:30 +0000 (0:00:03.212) 0:00:05.077 ***** 2026-01-07 01:05:33.133517 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-07 01:05:33.133521 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-07 01:05:33.133525 | orchestrator | 2026-01-07 01:05:33.133529 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-07 01:05:33.133532 | orchestrator | Wednesday 07 January 2026 01:04:38 +0000 (0:00:07.463) 0:00:12.540 ***** 2026-01-07 01:05:33.133536 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:05:33.133540 | orchestrator | 2026-01-07 01:05:33.133544 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-07 01:05:33.133547 | orchestrator | Wednesday 07 January 2026 01:04:41 +0000 (0:00:03.222) 0:00:15.763 ***** 2026-01-07 01:05:33.133551 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:05:33.133555 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-07 01:05:33.133559 | orchestrator | 2026-01-07 01:05:33.133562 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-07 01:05:33.133566 | orchestrator | Wednesday 07 January 2026 
01:04:45 +0000 (0:00:04.238) 0:00:20.001 ***** 2026-01-07 01:05:33.133570 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:05:33.133574 | orchestrator | 2026-01-07 01:05:33.133578 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-07 01:05:33.133582 | orchestrator | Wednesday 07 January 2026 01:04:49 +0000 (0:00:03.881) 0:00:23.883 ***** 2026-01-07 01:05:33.133585 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-07 01:05:33.133589 | orchestrator | 2026-01-07 01:05:33.133593 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:05:33.133597 | orchestrator | Wednesday 07 January 2026 01:04:53 +0000 (0:00:04.055) 0:00:27.938 ***** 2026-01-07 01:05:33.133600 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:33.133604 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:33.133608 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:33.133612 | orchestrator | 2026-01-07 01:05:33.133615 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-07 01:05:33.133619 | orchestrator | Wednesday 07 January 2026 01:04:53 +0000 (0:00:00.301) 0:00:28.240 ***** 2026-01-07 01:05:33.133625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133655 | orchestrator | 2026-01-07 01:05:33.133659 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-07 01:05:33.133663 | orchestrator | Wednesday 07 January 2026 01:04:54 +0000 (0:00:00.926) 0:00:29.166 ***** 2026-01-07 01:05:33.133667 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:33.133671 | orchestrator | 2026-01-07 01:05:33.133675 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-07 01:05:33.133678 | orchestrator | Wednesday 07 January 2026 01:04:55 +0000 (0:00:00.142) 0:00:29.308 ***** 2026-01-07 01:05:33.133682 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:33.133686 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:33.133690 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:33.133693 | orchestrator | 2026-01-07 01:05:33.133697 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-07 01:05:33.133701 | orchestrator | Wednesday 07 January 2026 01:04:55 +0000 (0:00:00.518) 0:00:29.827 ***** 2026-01-07 01:05:33.133705 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:05:33.133709 | orchestrator | 2026-01-07 01:05:33.133712 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-07 01:05:33.133717 | orchestrator | Wednesday 07 January 2026 01:04:56 +0000 (0:00:00.519) 0:00:30.347 ***** 2026-01-07 01:05:33.133721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133739 | orchestrator | 2026-01-07 01:05:33.133743 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-07 01:05:33.133747 | orchestrator | Wednesday 07 January 2026 01:04:57 +0000 (0:00:01.776) 0:00:32.123 ***** 2026-01-07 01:05:33.133753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133757 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
01:05:33.133761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133767 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:33.133774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133778 | 
orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:33.133782 | orchestrator | 2026-01-07 01:05:33.133786 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-07 01:05:33.133789 | orchestrator | Wednesday 07 January 2026 01:04:58 +0000 (0:00:00.845) 0:00:32.968 ***** 2026-01-07 01:05:33.133793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133797 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:33.133803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133807 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:33.133814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.133818 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:33.133822 | orchestrator | 2026-01-07 01:05:33.133875 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-07 01:05:33.133879 | orchestrator | Wednesday 07 January 2026 01:04:59 +0000 (0:00:00.730) 0:00:33.699 ***** 2026-01-07 01:05:33.133887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133907 | orchestrator | 2026-01-07 01:05:33.133911 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-07 01:05:33.133916 | orchestrator | Wednesday 07 January 2026 01:05:00 +0000 (0:00:01.375) 0:00:35.074 ***** 2026-01-07 01:05:33.133921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.133938 | orchestrator | 2026-01-07 01:05:33.133943 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-07 01:05:33.133947 | orchestrator | Wednesday 07 January 2026 01:05:03 +0000 (0:00:02.409) 0:00:37.484 ***** 
2026-01-07 01:05:33.133952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:05:33.133956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:05:33.133961 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-07 01:05:33.133965 | orchestrator | 2026-01-07 01:05:33.133970 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-07 01:05:33.133976 | orchestrator | Wednesday 07 January 2026 01:05:04 +0000 (0:00:01.345) 0:00:38.829 ***** 2026-01-07 01:05:33.133981 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:05:33.133986 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:05:33.133996 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:05:33.134000 | orchestrator | 2026-01-07 01:05:33.134005 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-07 01:05:33.134010 | orchestrator | Wednesday 07 January 2026 01:05:05 +0000 (0:00:01.247) 0:00:40.076 ***** 2026-01-07 01:05:33.134047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.134052 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:05:33.134057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.134062 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:05:33.134070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-07 01:05:33.134075 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:05:33.134080 | orchestrator | 2026-01-07 01:05:33.134084 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-07 01:05:33.134089 | orchestrator | Wednesday 07 January 2026 01:05:06 +0000 (0:00:00.478) 0:00:40.555 ***** 2026-01-07 01:05:33.134095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-07 01:05:33.134104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-07 01:05:33.134109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-07 01:05:33.134114 | orchestrator |
2026-01-07 01:05:33.134118 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-01-07 01:05:33.134123 | orchestrator | Wednesday 07 January 2026 01:05:07 +0000 (0:00:01.023) 0:00:41.578 *****
2026-01-07 01:05:33.134127 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:33.134131 | orchestrator |
2026-01-07 01:05:33.134136 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-07 01:05:33.134141 | orchestrator | Wednesday 07 January 2026 01:05:09 +0000 (0:00:02.401) 0:00:43.979 *****
2026-01-07 01:05:33.134145 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:33.134149 | orchestrator |
2026-01-07 01:05:33.134154 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-07 01:05:33.134158 | orchestrator | Wednesday 07 January 2026 01:05:11 +0000 (0:00:02.275) 0:00:46.255 *****
2026-01-07 01:05:33.134165 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:33.134169 | orchestrator |
2026-01-07 01:05:33.134183 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:05:33.134187 | orchestrator | Wednesday 07 January 2026 01:05:25 +0000 (0:00:13.830) 0:01:00.085 *****
2026-01-07 01:05:33.134192 | orchestrator |
2026-01-07 01:05:33.134196 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:05:33.134201 | orchestrator | Wednesday 07 January 2026 01:05:25 +0000 (0:00:00.062) 0:01:00.147 *****
2026-01-07 01:05:33.134205 | orchestrator |
2026-01-07 01:05:33.134209 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-07 01:05:33.134214 | orchestrator | Wednesday 07 January 2026 01:05:25 +0000 (0:00:00.063) 0:01:00.211 *****
2026-01-07 01:05:33.134218 | orchestrator |
2026-01-07 01:05:33.134226 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-07 01:05:33.134230 | orchestrator | Wednesday 07 January 2026 01:05:26 +0000 (0:00:00.068) 0:01:00.279 *****
2026-01-07 01:05:33.134235 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:05:33.134239 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:05:33.134244 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:05:33.134248 | orchestrator |
2026-01-07 01:05:33.134252 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:05:33.134258 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-07 01:05:33.134263 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 01:05:33.134268 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-07 01:05:33.134273 | orchestrator |
2026-01-07 01:05:33.134277 | orchestrator |
2026-01-07 01:05:33.134282 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:05:33.134289 | orchestrator | Wednesday 07 January 2026 01:05:31 +0000 (0:00:05.432) 0:01:05.712 *****
2026-01-07 01:05:33.134293 | orchestrator | ===============================================================================
2026-01-07 01:05:33.134298 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.83s
2026-01-07 01:05:33.134302 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.46s
2026-01-07 01:05:33.134307 | orchestrator | placement : Restart placement-api container ----------------------------- 5.43s
2026-01-07 01:05:33.134311 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.24s
2026-01-07 01:05:33.134316 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.06s
2026-01-07 01:05:33.134320 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.88s
2026-01-07 01:05:33.134324 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.22s
2026-01-07 01:05:33.134328 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.21s
2026-01-07 01:05:33.134332 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.41s
2026-01-07 01:05:33.134335 | orchestrator | placement : Creating placement databases -------------------------------- 2.40s
2026-01-07 01:05:33.134340 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.28s
2026-01-07 01:05:33.134346 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.78s
2026-01-07 01:05:33.134352 | orchestrator | placement : Copying over config.json files for services ----------------- 1.38s
2026-01-07 01:05:33.134358 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.35s
2026-01-07 01:05:33.134364 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.25s
2026-01-07 01:05:33.134371 | orchestrator | placement : Check placement containers ---------------------------------- 1.02s
2026-01-07 01:05:33.134377 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.93s
2026-01-07 01:05:33.134383 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.85s
2026-01-07 01:05:33.134390 | orchestrator | placement : include_tasks ----------------------------------------------- 0.79s
2026-01-07 01:05:33.134396 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.73s
2026-01-07 01:05:33.134401 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task 8d5a5fe7-c2b9-4157-a897-c3902dfad804 is in state SUCCESS
2026-01-07 01:05:33.135127 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task 3babaa7e-fbfb-4a70-aeed-0aefe743893a is in state STARTED
2026-01-07 01:05:33.137044 | orchestrator | 2026-01-07 01:05:33 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:05:33.137882 | orchestrator | 2026-01-07 01:05:33 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:36.185228 | orchestrator | 2026-01-07 01:05:36 |
INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:05:36.185739 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:05:36.187314 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task 3babaa7e-fbfb-4a70-aeed-0aefe743893a is in state STARTED
2026-01-07 01:05:36.189015 | orchestrator | 2026-01-07 01:05:36 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:05:36.189062 | orchestrator | 2026-01-07 01:05:36 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:39.211922 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:05:39.212708 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:05:39.213478 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task 3babaa7e-fbfb-4a70-aeed-0aefe743893a is in state SUCCESS
2026-01-07 01:05:39.216112 | orchestrator | 2026-01-07 01:05:39 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:05:39.216956 | orchestrator | 2026-01-07 01:05:39 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:42.265652 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:05:42.267776 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:05:42.269218 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:05:42.270615 | orchestrator | 2026-01-07 01:05:42 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:05:42.270649 | orchestrator | 2026-01-07 01:05:42 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:05:45.311648 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task
ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:05:45.314096 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:05:45.315742 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:05:45.317654 | orchestrator | 2026-01-07 01:05:45 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:05:45.317956 | orchestrator | 2026-01-07 01:05:45 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:21.820265 | orchestrator | 2026-01-07 01:06:21 | INFO  | Task
ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:21.820661 | orchestrator | 2026-01-07 01:06:21 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:06:21.823375 | orchestrator | 2026-01-07 01:06:21 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:06:21.824146 | orchestrator | 2026-01-07 01:06:21 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:06:21.824176 | orchestrator | 2026-01-07 01:06:21 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:24.878474 | orchestrator | 2026-01-07 01:06:24 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:24.880714 | orchestrator | 2026-01-07 01:06:24 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:06:24.882930 | orchestrator | 2026-01-07 01:06:24 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state STARTED
2026-01-07 01:06:24.884645 | orchestrator | 2026-01-07 01:06:24 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:06:24.884896 | orchestrator | 2026-01-07 01:06:24 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:27.940026 | orchestrator | 2026-01-07 01:06:27 | INFO  | Task d5a8574d-456e-4dc4-ab99-ccd2f125f8f4 is in state STARTED
2026-01-07 01:06:27.942437 | orchestrator | 2026-01-07 01:06:27 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:27.944817 | orchestrator | 2026-01-07 01:06:27 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:06:27.946935 | orchestrator | 2026-01-07 01:06:27 | INFO  | Task 2f1ab540-b6b9-46c9-9c16-0085c041e8db is in state SUCCESS
2026-01-07 01:06:27.947348 | orchestrator |
2026-01-07 01:06:27.947374 | orchestrator |
2026-01-07 01:06:27.947381 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:06:27.947388 | orchestrator |
2026-01-07 01:06:27.947395 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:06:27.947401 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.200) 0:00:00.200 *****
2026-01-07 01:06:27.947407 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.947414 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:27.947420 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.947427 | orchestrator |
2026-01-07 01:06:27.947433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:06:27.947440 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:00.534) 0:00:00.734 *****
2026-01-07 01:06:27.947446 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-07 01:06:27.947452 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-07 01:06:27.947458 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-07 01:06:27.947465 | orchestrator |
2026-01-07 01:06:27.947471 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-01-07 01:06:27.947477 | orchestrator |
2026-01-07 01:06:27.947483 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-01-07 01:06:27.947489 | orchestrator | Wednesday 07 January 2026 01:05:37 +0000 (0:00:00.758) 0:00:01.493 *****
2026-01-07 01:06:27.947495 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.947502 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.947508 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:27.947514 | orchestrator |
2026-01-07 01:06:27.947520 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:06:27.947529 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:06:27.947542 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:06:27.947553 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:06:27.947565 | orchestrator |
2026-01-07 01:06:27.947577 | orchestrator |
2026-01-07 01:06:27.947588 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:06:27.947599 | orchestrator | Wednesday 07 January 2026 01:05:38 +0000 (0:00:00.686) 0:00:02.179 *****
2026-01-07 01:06:27.947606 | orchestrator | ===============================================================================
2026-01-07 01:06:27.947612 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2026-01-07 01:06:27.947618 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.69s
2026-01-07 01:06:27.947624 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s
2026-01-07 01:06:27.947630 | orchestrator |
2026-01-07 01:06:27.949390 | orchestrator |
2026-01-07 01:06:27.949519 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:06:27.949536 | orchestrator |
2026-01-07 01:06:27.949548 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:06:27.949636 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:00.472) 0:00:00.473 *****
2026-01-07 01:06:27.949650 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.949662 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:27.949681 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.949692 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:06:27.949703 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:06:27.949714 | orchestrator | ok: [testbed-node-5]
2026-01-07
01:06:27.949725 | orchestrator |
2026-01-07 01:06:27.949736 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:06:27.949747 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:00.962) 0:00:01.435 *****
2026-01-07 01:06:27.949756 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-07 01:06:27.949763 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-07 01:06:27.949769 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-07 01:06:27.949775 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-07 01:06:27.949781 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-07 01:06:27.949787 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-07 01:06:27.949794 | orchestrator |
2026-01-07 01:06:27.949800 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-07 01:06:27.949806 | orchestrator |
2026-01-07 01:06:27.949812 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-07 01:06:27.949818 | orchestrator | Wednesday 07 January 2026 01:02:24 +0000 (0:00:00.598) 0:00:02.034 *****
2026-01-07 01:06:27.949825 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:06:27.949832 | orchestrator |
2026-01-07 01:06:27.949838 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-07 01:06:27.949844 | orchestrator | Wednesday 07 January 2026 01:02:25 +0000 (0:00:00.935) 0:00:02.969 *****
2026-01-07 01:06:27.949850 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:27.950429 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.950459 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.950470 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:06:27.950479 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:06:27.950486 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:06:27.950492 | orchestrator |
2026-01-07 01:06:27.950498 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-07 01:06:27.950505 | orchestrator | Wednesday 07 January 2026 01:02:26 +0000 (0:00:01.057) 0:00:04.026 *****
2026-01-07 01:06:27.950520 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.950527 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:27.950533 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.950539 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:06:27.950546 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:06:27.950552 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:06:27.950558 | orchestrator |
2026-01-07 01:06:27.950564 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-07 01:06:27.950570 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:01.023) 0:00:05.049 *****
2026-01-07 01:06:27.950577 | orchestrator | ok: [testbed-node-0] => {
2026-01-07 01:06:27.950584 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950593 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950604 | orchestrator | }
2026-01-07 01:06:27.950614 | orchestrator | ok: [testbed-node-1] => {
2026-01-07 01:06:27.950625 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950635 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950644 | orchestrator | }
2026-01-07 01:06:27.950655 | orchestrator | ok: [testbed-node-2] => {
2026-01-07 01:06:27.950666 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950677 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950687 | orchestrator | }
2026-01-07 01:06:27.950713 | orchestrator | ok: [testbed-node-3] => {
2026-01-07 01:06:27.950736 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950743 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950749 | orchestrator | }
2026-01-07 01:06:27.950755 | orchestrator | ok: [testbed-node-4] => {
2026-01-07 01:06:27.950762 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950768 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950774 | orchestrator | }
2026-01-07 01:06:27.950780 | orchestrator | ok: [testbed-node-5] => {
2026-01-07 01:06:27.950786 | orchestrator |  "changed": false,
2026-01-07 01:06:27.950796 | orchestrator |  "msg": "All assertions passed"
2026-01-07 01:06:27.950812 | orchestrator | }
2026-01-07 01:06:27.950824 | orchestrator |
2026-01-07 01:06:27.950833 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-07 01:06:27.950844 | orchestrator | Wednesday 07 January 2026 01:02:27 +0000 (0:00:00.655) 0:00:05.705 *****
2026-01-07 01:06:27.950853 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.950863 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.950874 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.950884 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.950895 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.950905 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.950914 | orchestrator |
2026-01-07 01:06:27.950924 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-07 01:06:27.950935 | orchestrator | Wednesday 07 January 2026 01:02:28 +0000 (0:00:00.528) 0:00:06.234 *****
2026-01-07 01:06:27.950945 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-01-07 01:06:27.950957 | orchestrator |
2026-01-07 01:06:27.950968 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-07 01:06:27.950979 | orchestrator |
Wednesday 07 January 2026 01:02:31 +0000 (0:00:03.623) 0:00:09.858 *****
2026-01-07 01:06:27.950990 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-07 01:06:27.951002 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-01-07 01:06:27.951013 | orchestrator |
2026-01-07 01:06:27.951068 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-07 01:06:27.951083 | orchestrator | Wednesday 07 January 2026 01:02:37 +0000 (0:00:05.985) 0:00:15.843 *****
2026-01-07 01:06:27.951094 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:06:27.951106 | orchestrator |
2026-01-07 01:06:27.951117 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-07 01:06:27.951128 | orchestrator | Wednesday 07 January 2026 01:02:41 +0000 (0:00:03.071) 0:00:18.914 *****
2026-01-07 01:06:27.951139 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:06:27.951150 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-01-07 01:06:27.951161 | orchestrator |
2026-01-07 01:06:27.951172 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-07 01:06:27.951183 | orchestrator | Wednesday 07 January 2026 01:02:44 +0000 (0:00:03.572) 0:00:22.486 *****
2026-01-07 01:06:27.951193 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:06:27.951204 | orchestrator |
2026-01-07 01:06:27.951214 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-07 01:06:27.951226 | orchestrator | Wednesday 07 January 2026 01:02:47 +0000 (0:00:03.185) 0:00:25.672 *****
2026-01-07 01:06:27.951238 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-07 01:06:27.951299 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-01-07 01:06:27.951311 | orchestrator |
2026-01-07 01:06:27.951322 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-07 01:06:27.951333 | orchestrator | Wednesday 07 January 2026 01:02:55 +0000 (0:00:07.285) 0:00:32.957 *****
2026-01-07 01:06:27.951345 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.951364 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.951375 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.951385 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.951395 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.951405 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.951416 | orchestrator |
2026-01-07 01:06:27.951427 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-07 01:06:27.951438 | orchestrator | Wednesday 07 January 2026 01:02:55 +0000 (0:00:00.709) 0:00:33.667 *****
2026-01-07 01:06:27.951449 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.951459 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.951469 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.951480 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.951490 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.951500 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.951510 | orchestrator |
2026-01-07 01:06:27.951521 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-07 01:06:27.951532 | orchestrator | Wednesday 07 January 2026 01:02:57 +0000 (0:00:02.174) 0:00:35.841 *****
2026-01-07 01:06:27.951549 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:27.951560 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:27.951570 | orchestrator | ok: [testbed-node-1]
2026-01-07
01:06:27.951581 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:06:27.951591 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:06:27.951602 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:06:27.951613 | orchestrator |
2026-01-07 01:06:27.951624 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-07 01:06:27.951634 | orchestrator | Wednesday 07 January 2026 01:02:59 +0000 (0:00:01.099) 0:00:36.940 *****
2026-01-07 01:06:27.951645 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.951655 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.951666 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.951677 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.951687 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.951698 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.951708 | orchestrator |
2026-01-07 01:06:27.951719 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-01-07 01:06:27.951730 | orchestrator | Wednesday 07 January 2026 01:03:01 +0000 (0:00:02.747) 0:00:39.688 *****
2026-01-07 01:06:27.951743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:06:27.951790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-07 01:06:27.951811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:06:27.951823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-07 01:06:27.951840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-07 01:06:27.951852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-07 01:06:27.951863 | orchestrator |
2026-01-07 01:06:27.951873 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-01-07 01:06:27.951884 | orchestrator | Wednesday 07 January 2026 01:03:05 +0000 (0:00:03.217) 0:00:42.905 *****
2026-01-07 01:06:27.951894 | orchestrator | [WARNING]: Skipped
2026-01-07 01:06:27.951905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-01-07 01:06:27.951916 | orchestrator | due to this access issue:
2026-01-07 01:06:27.951933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-01-07 01:06:27.951940 | orchestrator | a directory
2026-01-07 01:06:27.951947 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:06:27.951953 | orchestrator |
2026-01-07 01:06:27.951978 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-07 01:06:27.951986 | orchestrator | Wednesday 07 January 2026 01:03:05 +0000 (0:00:00.764) 0:00:43.669 *****
2026-01-07 01:06:27.951993 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-07 01:06:27.952000 | orchestrator |
2026-01-07 01:06:27.952007 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-07 01:06:27.952013 | orchestrator | Wednesday 07 January 2026 01:03:06 +0000
(0:00:01.102) 0:00:44.772 ***** 2026-01-07 01:06:27.952020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952090 | orchestrator | 2026-01-07 01:06:27.952096 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-07 01:06:27.952102 | orchestrator | Wednesday 07 January 2026 01:03:09 +0000 (0:00:02.835) 0:00:47.607 ***** 2026-01-07 01:06:27.952112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952118 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952135 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.952157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952164 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.952171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952177 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.952184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952190 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.952199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952206 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.952212 | orchestrator | 2026-01-07 01:06:27.952218 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-07 01:06:27.952224 | orchestrator | Wednesday 07 January 2026 01:03:13 +0000 (0:00:03.570) 0:00:51.177 ***** 2026-01-07 01:06:27.952231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952241 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952308 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.952319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952328 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.952337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952344 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.952350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952361 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.952367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952374 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.952381 | orchestrator | 2026-01-07 01:06:27.952387 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-07 01:06:27.952397 | orchestrator | Wednesday 07 January 2026 01:03:15 +0000 (0:00:02.643) 0:00:53.821 ***** 2026-01-07 01:06:27.952515 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.952526 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952533 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.952539 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.952545 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.952551 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.952557 | orchestrator | 2026-01-07 01:06:27.952563 | orchestrator | TASK 
[neutron : Check if policies shall be overwritten] ************************ 2026-01-07 01:06:27.952570 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:02.341) 0:00:56.163 ***** 2026-01-07 01:06:27.952576 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952582 | orchestrator | 2026-01-07 01:06:27.952588 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-07 01:06:27.952594 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:00.089) 0:00:56.252 ***** 2026-01-07 01:06:27.952601 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952607 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.952613 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.952620 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.952626 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.952632 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.952639 | orchestrator | 2026-01-07 01:06:27.952651 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-07 01:06:27.952666 | orchestrator | Wednesday 07 January 2026 01:03:18 +0000 (0:00:00.563) 0:00:56.816 ***** 2026-01-07 01:06:27.952684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952703 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.952715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952727 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.952740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952751 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.952772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.952784 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.952795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 
01:06:27.952806 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.952821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.952839 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.952849 | orchestrator | 2026-01-07 01:06:27.952861 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-07 01:06:27.952871 | orchestrator | Wednesday 07 January 2026 01:03:21 +0000 (0:00:02.863) 0:00:59.679 ***** 2026-01-07 01:06:27.952883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.952924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.952968 | orchestrator | 2026-01-07 01:06:27.952980 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-07 01:06:27.952991 | orchestrator | Wednesday 07 January 2026 01:03:26 +0000 (0:00:04.887) 0:01:04.567 ***** 2026-01-07 01:06:27.953008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.953070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.953087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.953099 
| orchestrator | 2026-01-07 01:06:27.953111 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-07 01:06:27.953122 | orchestrator | Wednesday 07 January 2026 01:03:32 +0000 (0:00:05.905) 0:01:10.472 ***** 2026-01-07 01:06:27.953134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.953152 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.953170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.953182 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.953194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953207 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.953233 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.953267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953287 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953312 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953322 | orchestrator | 2026-01-07 01:06:27.953333 | 
orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-07 01:06:27.953346 | orchestrator | Wednesday 07 January 2026 01:03:35 +0000 (0:00:02.646) 0:01:13.119 ***** 2026-01-07 01:06:27.953358 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953370 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953383 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953396 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:27.953411 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:27.953428 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:27.953440 | orchestrator | 2026-01-07 01:06:27.953450 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-07 01:06:27.953460 | orchestrator | Wednesday 07 January 2026 01:03:37 +0000 (0:00:02.661) 0:01:15.780 ***** 2026-01-07 01:06:27.953470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953481 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953502 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.953539 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.953586 | orchestrator | 2026-01-07 01:06:27.953597 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-07 01:06:27.953608 | orchestrator | Wednesday 07 January 2026 01:03:41 +0000 (0:00:03.533) 0:01:19.314 ***** 2026-01-07 01:06:27.953618 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.953628 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.953639 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.953650 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953660 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953671 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953681 | orchestrator | 2026-01-07 01:06:27.953691 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-07 01:06:27.953701 | orchestrator | Wednesday 07 January 2026 01:03:43 +0000 (0:00:02.139) 0:01:21.454 ***** 2026-01-07 01:06:27.953718 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.953728 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.953739 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.953750 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953761 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953772 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953783 | orchestrator | 2026-01-07 01:06:27.953793 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-07 
01:06:27.953803 | orchestrator | Wednesday 07 January 2026 01:03:46 +0000 (0:00:02.992) 0:01:24.446 ***** 2026-01-07 01:06:27.953818 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953829 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.953839 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.953850 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.953861 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953872 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.953883 | orchestrator | 2026-01-07 01:06:27.953892 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-07 01:06:27.953902 | orchestrator | Wednesday 07 January 2026 01:03:48 +0000 (0:00:02.091) 0:01:26.537 ***** 2026-01-07 01:06:27.953913 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.953942 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.953953 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.953963 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.953983 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.953994 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.954004 | orchestrator | 2026-01-07 01:06:27.954056 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-07 01:06:27.954071 | orchestrator | Wednesday 07 January 2026 01:03:50 +0000 (0:00:01.868) 0:01:28.406 ***** 2026-01-07 01:06:27.954083 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.954095 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.954106 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.954117 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.954128 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.954140 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.954151 | orchestrator | 
2026-01-07 01:06:27.954162 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-07 01:06:27.954174 | orchestrator | Wednesday 07 January 2026 01:03:53 +0000 (0:00:02.514) 0:01:30.920 ***** 2026-01-07 01:06:27.954185 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.954197 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.954208 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.954220 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.954230 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.954241 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.954274 | orchestrator | 2026-01-07 01:06:27.954285 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-07 01:06:27.954295 | orchestrator | Wednesday 07 January 2026 01:03:55 +0000 (0:00:02.433) 0:01:33.354 ***** 2026-01-07 01:06:27.954305 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954316 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.954327 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954338 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.954349 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954365 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.954376 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954387 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.954406 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954417 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.954428 | 
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-07 01:06:27.954439 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.954450 | orchestrator | 2026-01-07 01:06:27.954460 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-07 01:06:27.954471 | orchestrator | Wednesday 07 January 2026 01:03:57 +0000 (0:00:01.737) 0:01:35.091 ***** 2026-01-07 01:06:27.954483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954494 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.954515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954526 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.954548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954559 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.954575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.954594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.954605 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.954616 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.954627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.954638 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:06:27.954649 | orchestrator | 2026-01-07 01:06:27.954659 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-07 01:06:27.954671 | orchestrator | Wednesday 07 January 2026 01:03:59 +0000 (0:00:01.957) 0:01:37.049 ***** 2026-01-07 01:06:27.954690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954701 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.954712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954729 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.954744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.954756 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.954767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.954778 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.954795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.954807 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.954818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 
01:06:27.954829 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.954838 | orchestrator |
2026-01-07 01:06:27.954844 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-07 01:06:27.954851 | orchestrator | Wednesday 07 January 2026 01:04:00 +0000 (0:00:01.802) 0:01:38.851 *****
2026-01-07 01:06:27.954861 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.954867 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.954874 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.954880 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.954886 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.954897 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.954911 | orchestrator |
2026-01-07 01:06:27.954923 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-07 01:06:27.954934 | orchestrator | Wednesday 07 January 2026 01:04:02 +0000 (0:00:01.711) 0:01:40.563 *****
2026-01-07 01:06:27.954944 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.954954 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.954965 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.954974 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:06:27.954984 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:06:27.954993 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:06:27.955003 | orchestrator |
2026-01-07 01:06:27.955012 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-07 01:06:27.955022 | orchestrator | Wednesday 07 January 2026 01:04:06 +0000 (0:00:04.202) 0:01:44.766 *****
2026-01-07 01:06:27.955032 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955042 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955063 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955074 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955084 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955094 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955104 | orchestrator |
2026-01-07 01:06:27.955114 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-07 01:06:27.955121 | orchestrator | Wednesday 07 January 2026 01:04:08 +0000 (0:00:01.814) 0:01:46.581 *****
2026-01-07 01:06:27.955128 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955134 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955140 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955146 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955152 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955158 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955165 | orchestrator |
2026-01-07 01:06:27.955171 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-07 01:06:27.955177 | orchestrator | Wednesday 07 January 2026 01:04:10 +0000 (0:00:01.965) 0:01:48.546 *****
2026-01-07 01:06:27.955183 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955189 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955195 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955201 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955208 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955214 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955220 | orchestrator |
2026-01-07 01:06:27.955226 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-07 01:06:27.955232 | orchestrator | Wednesday 07 January 2026 01:04:13 +0000 (0:00:02.748) 0:01:51.294 *****
2026-01-07 01:06:27.955238 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955358 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955380 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955410 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955417 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955423 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955429 | orchestrator |
2026-01-07 01:06:27.955436 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-07 01:06:27.955443 | orchestrator | Wednesday 07 January 2026 01:04:16 +0000 (0:00:02.641) 0:01:53.935 *****
2026-01-07 01:06:27.955449 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955455 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955461 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955478 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955485 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955491 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955497 | orchestrator |
2026-01-07 01:06:27.955503 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-07 01:06:27.955510 | orchestrator | Wednesday 07 January 2026 01:04:17 +0000 (0:00:01.746) 0:01:55.682 *****
2026-01-07 01:06:27.955516 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955523 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955529 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955535 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955541 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955547 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955554 | orchestrator |
2026-01-07 01:06:27.955560 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-07 01:06:27.955575 | orchestrator | Wednesday 07 January 2026 01:04:20 +0000 (0:00:02.512) 0:01:58.194 *****
2026-01-07 01:06:27.955582 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955588 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955595 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955601 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.955607 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955613 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955664 | orchestrator |
2026-01-07 01:06:27.955671 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-07 01:06:27.955678 | orchestrator | Wednesday 07 January 2026 01:04:22 +0000 (0:00:02.593) 0:02:00.788 *****
2026-01-07 01:06:27.955684 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955691 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:06:27.955697 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955703 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:06:27.955710 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955722 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:06:27.955729 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955734 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.955741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955747 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.955752 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-07 01:06:27.955758 |
orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.955764 | orchestrator | 2026-01-07 01:06:27.955770 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-07 01:06:27.955776 | orchestrator | Wednesday 07 January 2026 01:04:25 +0000 (0:00:02.099) 0:02:02.888 ***** 2026-01-07 01:06:27.955787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.955799 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.955805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.955814 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.955827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-07 01:06:27.955833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.955840 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:06:27.955845 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:27.955863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.955869 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:06:27.955878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-07 01:06:27.955888 | orchestrator | skipping: [testbed-node-5] 2026-01-07 
01:06:27.955894 | orchestrator | 2026-01-07 01:06:27.955900 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-07 01:06:27.955906 | orchestrator | Wednesday 07 January 2026 01:04:26 +0000 (0:00:01.596) 0:02:04.484 ***** 2026-01-07 01:06:27.955912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.955924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-01-07 01:06:27.955931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.955940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-07 01:06:27.955950 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.955957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-07 01:06:27.955963 | orchestrator | 2026-01-07 01:06:27.955969 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-07 01:06:27.955975 | orchestrator | Wednesday 07 January 2026 01:04:28 +0000 (0:00:02.360) 0:02:06.845 ***** 2026-01-07 01:06:27.955981 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:27.955987 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:27.955992 | orchestrator | skipping: [testbed-node-2] 
2026-01-07 01:06:27.955998 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:06:27.956004 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:06:27.956013 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:06:27.956019 | orchestrator |
2026-01-07 01:06:27.956025 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-07 01:06:27.956030 | orchestrator | Wednesday 07 January 2026 01:04:29 +0000 (0:00:00.521) 0:02:07.366 *****
2026-01-07 01:06:27.956036 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:27.956042 | orchestrator |
2026-01-07 01:06:27.956048 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-07 01:06:27.956054 | orchestrator | Wednesday 07 January 2026 01:04:31 +0000 (0:00:01.944) 0:02:09.311 *****
2026-01-07 01:06:27.956060 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:27.956066 | orchestrator |
2026-01-07 01:06:27.956072 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-07 01:06:27.956078 | orchestrator | Wednesday 07 January 2026 01:04:34 +0000 (0:00:02.873) 0:02:12.184 *****
2026-01-07 01:06:27.956083 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:27.956089 | orchestrator |
2026-01-07 01:06:27.956095 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956101 | orchestrator | Wednesday 07 January 2026 01:05:12 +0000 (0:00:38.544) 0:02:50.729 *****
2026-01-07 01:06:27.956106 | orchestrator |
2026-01-07 01:06:27.956112 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956135 | orchestrator | Wednesday 07 January 2026 01:05:12 +0000 (0:00:00.064) 0:02:50.793 *****
2026-01-07 01:06:27.956146 | orchestrator |
2026-01-07 01:06:27.956152 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956158 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:00.323) 0:02:51.117 *****
2026-01-07 01:06:27.956163 | orchestrator |
2026-01-07 01:06:27.956169 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956175 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:00.067) 0:02:51.184 *****
2026-01-07 01:06:27.956181 | orchestrator |
2026-01-07 01:06:27.956186 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956192 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:00.065) 0:02:51.250 *****
2026-01-07 01:06:27.956198 | orchestrator |
2026-01-07 01:06:27.956204 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-07 01:06:27.956210 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:00.067) 0:02:51.318 *****
2026-01-07 01:06:27.956219 | orchestrator |
2026-01-07 01:06:27.956229 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-07 01:06:27.956235 | orchestrator | Wednesday 07 January 2026 01:05:13 +0000 (0:00:00.066) 0:02:51.384 *****
2026-01-07 01:06:27.956240 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:27.956259 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:06:27.956265 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:06:27.956271 | orchestrator |
2026-01-07 01:06:27.956280 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-07 01:06:27.956286 | orchestrator | Wednesday 07 January 2026 01:05:33 +0000 (0:00:20.030) 0:03:11.415 *****
2026-01-07 01:06:27.956292 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:06:27.956298 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:06:27.956303 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:06:27.956309 | orchestrator |
2026-01-07 01:06:27.956315 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:06:27.956322 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:06:27.956328 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-07 01:06:27.956334 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-07 01:06:27.956340 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:06:27.956346 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:06:27.956352 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-07 01:06:27.956358 | orchestrator |
2026-01-07 01:06:27.956363 | orchestrator |
2026-01-07 01:06:27.956369 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:06:27.956375 | orchestrator | Wednesday 07 January 2026 01:06:25 +0000 (0:00:52.129) 0:04:03.545 *****
2026-01-07 01:06:27.956381 | orchestrator | ===============================================================================
2026-01-07 01:06:27.956386 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.13s
2026-01-07 01:06:27.956412 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.54s
2026-01-07 01:06:27.956419 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.03s
2026-01-07 01:06:27.956424 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.29s
2026-01-07 01:06:27.956430 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.99s
2026-01-07 01:06:27.956442 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.91s
2026-01-07 01:06:27.956447 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.89s
2026-01-07 01:06:27.956453 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.20s
2026-01-07 01:06:27.956464 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.62s
2026-01-07 01:06:27.956469 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.57s
2026-01-07 01:06:27.956475 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.57s
2026-01-07 01:06:27.956482 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.53s
2026-01-07 01:06:27.956487 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.22s
2026-01-07 01:06:27.956493 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.19s
2026-01-07 01:06:27.956499 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.07s
2026-01-07 01:06:27.956505 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 2.99s
2026-01-07 01:06:27.956510 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.87s
2026-01-07 01:06:27.956516 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.86s
2026-01-07 01:06:27.956522 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.84s
2026-01-07 01:06:27.956528 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 2.75s
2026-01-07 01:06:27.956534 | orchestrator |
2026-01-07 01:06:27 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:06:27.956540 | orchestrator | 2026-01-07 01:06:27 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:31.008243 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task d5a8574d-456e-4dc4-ab99-ccd2f125f8f4 is in state STARTED
2026-01-07 01:06:31.010059 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:31.012198 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:06:31.014110 | orchestrator | 2026-01-07 01:06:31 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:06:31.014291 | orchestrator | 2026-01-07 01:06:31 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:55.477644 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task d5a8574d-456e-4dc4-ab99-ccd2f125f8f4 is in state STARTED
2026-01-07 01:06:55.477702 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:55.479995 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state STARTED
2026-01-07 01:06:55.481216 | orchestrator | 2026-01-07 01:06:55 | INFO  | Task 
2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED
2026-01-07 01:06:55.481778 | orchestrator | 2026-01-07 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:06:58.526977 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task d5a8574d-456e-4dc4-ab99-ccd2f125f8f4 is in state STARTED
2026-01-07 01:06:58.527669 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED
2026-01-07 01:06:58.529477 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task b4e0ff7c-4bf4-47b8-bf38-167b09b3bfa7 is in state SUCCESS
2026-01-07 01:06:58.530928 | orchestrator |
2026-01-07 01:06:58.531047 | orchestrator |
2026-01-07 01:06:58.531051 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:06:58.531056 | orchestrator |
2026-01-07 01:06:58.531059 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:06:58.531064 | orchestrator | Wednesday 07 January 2026 01:05:17 +0000 (0:00:00.214) 0:00:00.214 *****
2026-01-07 01:06:58.531071 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:06:58.531079 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:06:58.531085 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:06:58.531091 | orchestrator |
2026-01-07 01:06:58.531097 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:06:58.531101 | orchestrator | Wednesday 07 January 2026 01:05:17 +0000 (0:00:00.260) 0:00:00.474 *****
2026-01-07 01:06:58.531104 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-07 01:06:58.531108 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-07 01:06:58.531111 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-07 01:06:58.531114 | orchestrator |
2026-01-07 01:06:58.531117 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-07 01:06:58.531121 | orchestrator |
2026-01-07 01:06:58.531124 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-07 01:06:58.531127 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.357) 0:00:00.832 *****
2026-01-07 01:06:58.531130 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:06:58.531134 | orchestrator |
2026-01-07 01:06:58.531137 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-07 01:06:58.531140 | orchestrator | Wednesday 07 January 2026 01:05:18 +0000 (0:00:00.420) 0:00:01.253 *****
2026-01-07 01:06:58.531144 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-07 01:06:58.531147 | orchestrator |
2026-01-07 01:06:58.531150 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-07 01:06:58.531153 | orchestrator | Wednesday 07 January 2026 01:05:22 +0000 (0:00:04.226) 0:00:05.479 *****
2026-01-07 01:06:58.531157 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-07 01:06:58.531161 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-07 01:06:58.531166 | orchestrator |
2026-01-07 01:06:58.531195 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-07 01:06:58.531201 | orchestrator | Wednesday 07 January 2026 01:05:29 +0000 (0:00:06.363) 0:00:11.843 *****
2026-01-07 01:06:58.531205 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:06:58.531210 | orchestrator |
2026-01-07 01:06:58.531215 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-07 01:06:58.531220 | orchestrator | Wednesday 07 January 2026 01:05:32 +0000 (0:00:03.549) 0:00:15.393 *****
2026-01-07 01:06:58.531225 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:06:58.531230 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-07 01:06:58.531235 | orchestrator |
2026-01-07 01:06:58.531239 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-07 01:06:58.531257 | orchestrator | Wednesday 07 January 2026 01:05:36 +0000 (0:00:03.899) 0:00:19.293 *****
2026-01-07 01:06:58.531261 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:06:58.531266 | orchestrator |
2026-01-07 01:06:58.531271 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-07 01:06:58.531276 | orchestrator | Wednesday 07 January 2026 01:05:39 +0000 (0:00:03.320) 0:00:22.614 *****
2026-01-07 01:06:58.531282 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-07 01:06:58.531287 | orchestrator |
2026-01-07 01:06:58.531292 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-07 01:06:58.531298 | orchestrator | Wednesday 07 January 2026 01:05:43 +0000 (0:00:03.570) 0:00:26.184 *****
2026-01-07 01:06:58.531301 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:58.531305 | orchestrator |
2026-01-07 01:06:58.531308 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-07 01:06:58.531311 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:03.077) 0:00:29.261 *****
2026-01-07 01:06:58.531314 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:06:58.531317 | orchestrator |
2026-01-07 01:06:58.531322 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-07 01:06:58.531327 | orchestrator | Wednesday 
07 January 2026 01:05:49 +0000 (0:00:03.232) 0:00:32.493 ***** 2026-01-07 01:06:58.531332 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.531337 | orchestrator | 2026-01-07 01:06:58.531342 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-07 01:06:58.531347 | orchestrator | Wednesday 07 January 2026 01:05:52 +0000 (0:00:02.903) 0:00:35.397 ***** 2026-01-07 01:06:58.531367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531409 | orchestrator | 2026-01-07 01:06:58.531414 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-07 01:06:58.531419 | orchestrator | Wednesday 07 January 2026 01:05:54 +0000 (0:00:01.713) 0:00:37.111 ***** 2026-01-07 01:06:58.531423 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
01:06:58.531428 | orchestrator | 2026-01-07 01:06:58.531432 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-01-07 01:06:58.531437 | orchestrator | Wednesday 07 January 2026 01:05:54 +0000 (0:00:00.153) 0:00:37.264 ***** 2026-01-07 01:06:58.531442 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:58.531447 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:58.531451 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:58.531482 | orchestrator | 2026-01-07 01:06:58.531488 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-01-07 01:06:58.531494 | orchestrator | Wednesday 07 January 2026 01:05:55 +0000 (0:00:00.572) 0:00:37.836 ***** 2026-01-07 01:06:58.531499 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:06:58.531503 | orchestrator | 2026-01-07 01:06:58.531507 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-01-07 01:06:58.531510 | orchestrator | Wednesday 07 January 2026 01:05:56 +0000 (0:00:00.897) 0:00:38.734 ***** 2026-01-07 01:06:58.531517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-01-07 01:06:58.531534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531546 | orchestrator | 2026-01-07 01:06:58.531549 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-07 01:06:58.531552 | orchestrator | Wednesday 07 January 2026 01:05:58 +0000 (0:00:02.334) 0:00:41.069 ***** 2026-01-07 01:06:58.531556 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:06:58.531559 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:06:58.531562 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:06:58.531565 | orchestrator | 2026-01-07 01:06:58.531568 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-07 01:06:58.531571 | orchestrator | Wednesday 07 January 2026 01:05:58 +0000 (0:00:00.338) 0:00:41.407 ***** 2026-01-07 01:06:58.531575 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:06:58.531578 | orchestrator | 2026-01-07 01:06:58.531581 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-07 01:06:58.531584 | orchestrator | Wednesday 07 January 2026 01:05:59 +0000 (0:00:00.802) 0:00:42.209 ***** 2026-01-07 01:06:58.531589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-01-07 01:06:58.531615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531619 | orchestrator | 2026-01-07 01:06:58.531622 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-07 01:06:58.531625 | orchestrator | Wednesday 07 January 2026 01:06:02 +0000 (0:00:02.579) 0:00:44.789 ***** 2026-01-07 01:06:58.531631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531639 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:58.531643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531649 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:58.531654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531666 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:58.531669 | orchestrator | 2026-01-07 01:06:58.531672 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-07 01:06:58.531675 | orchestrator | Wednesday 07 January 2026 01:06:02 +0000 (0:00:00.771) 0:00:45.560 ***** 2026-01-07 01:06:58.531695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531703 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:58.531709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531721 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:58.531725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531732 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:58.531736 | orchestrator | 2026-01-07 01:06:58.531740 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-07 01:06:58.531743 | orchestrator | Wednesday 07 January 2026 01:06:04 +0000 (0:00:01.228) 0:00:46.788 ***** 2026-01-07 01:06:58.531747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531778 | orchestrator | 2026-01-07 01:06:58.531782 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-07 01:06:58.531785 
| orchestrator | Wednesday 07 January 2026 01:06:06 +0000 (0:00:02.655) 0:00:49.444 ***** 2026-01-07 01:06:58.531791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-01-07 01:06:58.531804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531823 | orchestrator | 2026-01-07 01:06:58.531827 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-07 01:06:58.531833 | orchestrator | Wednesday 07 January 2026 01:06:13 +0000 (0:00:06.793) 0:00:56.238 ***** 2026-01-07 01:06:58.531837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531844 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:58.531865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-07 01:06:58.531884 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:58.531888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:06:58.531899 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:58.531903 | orchestrator | 2026-01-07 01:06:58.531907 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-07 01:06:58.531911 | orchestrator | Wednesday 07 January 2026 01:06:14 +0000 (0:00:01.201) 0:00:57.439 ***** 2026-01-07 01:06:58.531915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-07 01:06:58.531935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:06:58.531950 | orchestrator | 2026-01-07 01:06:58.531955 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-07 01:06:58.531961 | orchestrator | Wednesday 07 January 2026 01:06:17 +0000 (0:00:03.208) 0:01:00.647 ***** 2026-01-07 01:06:58.531966 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:06:58.531971 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:06:58.531979 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:06:58.531983 | orchestrator | 2026-01-07 01:06:58.531988 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-07 01:06:58.531993 | orchestrator | Wednesday 07 January 2026 01:06:18 +0000 (0:00:00.251) 0:01:00.898 ***** 2026-01-07 01:06:58.531997 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.532002 | orchestrator | 2026-01-07 01:06:58.532006 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-07 01:06:58.532010 | orchestrator | Wednesday 07 January 2026 01:06:20 +0000 (0:00:02.078) 0:01:02.977 ***** 2026-01-07 01:06:58.532015 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.532020 | orchestrator | 2026-01-07 01:06:58.532025 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 
2026-01-07 01:06:58.532030 | orchestrator | Wednesday 07 January 2026 01:06:22 +0000 (0:00:01.962) 0:01:04.940 ***** 2026-01-07 01:06:58.532035 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.532039 | orchestrator | 2026-01-07 01:06:58.532044 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-07 01:06:58.532049 | orchestrator | Wednesday 07 January 2026 01:06:36 +0000 (0:00:13.976) 0:01:18.916 ***** 2026-01-07 01:06:58.532054 | orchestrator | 2026-01-07 01:06:58.532059 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-07 01:06:58.532065 | orchestrator | Wednesday 07 January 2026 01:06:36 +0000 (0:00:00.062) 0:01:18.978 ***** 2026-01-07 01:06:58.532069 | orchestrator | 2026-01-07 01:06:58.532077 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-07 01:06:58.532083 | orchestrator | Wednesday 07 January 2026 01:06:36 +0000 (0:00:00.066) 0:01:19.044 ***** 2026-01-07 01:06:58.532088 | orchestrator | 2026-01-07 01:06:58.532092 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-07 01:06:58.532097 | orchestrator | Wednesday 07 January 2026 01:06:36 +0000 (0:00:00.070) 0:01:19.115 ***** 2026-01-07 01:06:58.532102 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.532107 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:58.532112 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:58.532117 | orchestrator | 2026-01-07 01:06:58.532122 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-07 01:06:58.532127 | orchestrator | Wednesday 07 January 2026 01:06:48 +0000 (0:00:12.083) 0:01:31.198 ***** 2026-01-07 01:06:58.532132 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:06:58.532137 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:06:58.532142 | 
orchestrator | changed: [testbed-node-1] 2026-01-07 01:06:58.532148 | orchestrator | 2026-01-07 01:06:58.532158 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:06:58.532163 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-07 01:06:58.532167 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:06:58.532170 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:06:58.532174 | orchestrator | 2026-01-07 01:06:58.532177 | orchestrator | 2026-01-07 01:06:58.532180 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:06:58.532183 | orchestrator | Wednesday 07 January 2026 01:06:57 +0000 (0:00:08.656) 0:01:39.855 ***** 2026-01-07 01:06:58.532186 | orchestrator | =============================================================================== 2026-01-07 01:06:58.532189 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.98s 2026-01-07 01:06:58.532192 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.08s 2026-01-07 01:06:58.532195 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 8.66s 2026-01-07 01:06:58.532202 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.79s 2026-01-07 01:06:58.532205 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.36s 2026-01-07 01:06:58.532208 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.23s 2026-01-07 01:06:58.532211 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.90s 2026-01-07 01:06:58.532214 | orchestrator | service-ks-register : 
magnum | Granting user roles ---------------------- 3.57s 2026-01-07 01:06:58.532217 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.55s 2026-01-07 01:06:58.532220 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.32s 2026-01-07 01:06:58.532223 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.23s 2026-01-07 01:06:58.532227 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.21s 2026-01-07 01:06:58.532230 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.08s 2026-01-07 01:06:58.532233 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.90s 2026-01-07 01:06:58.532236 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.66s 2026-01-07 01:06:58.532239 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.58s 2026-01-07 01:06:58.532242 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.33s 2026-01-07 01:06:58.532245 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.08s 2026-01-07 01:06:58.532248 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.96s 2026-01-07 01:06:58.532251 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.71s 2026-01-07 01:06:58.532254 | orchestrator | 2026-01-07 01:06:58 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED 2026-01-07 01:06:58.532257 | orchestrator | 2026-01-07 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:07:01.582222 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task d5a8574d-456e-4dc4-ab99-ccd2f125f8f4 is in state SUCCESS 2026-01-07 01:07:01.583805 | orchestrator | 2026-01-07 01:07:01 | 
INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:07:01.585271 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state STARTED 2026-01-07 01:07:01.586671 | orchestrator | 2026-01-07 01:07:01 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:07:01.586696 | orchestrator | 2026-01-07 01:07:01 | INFO  | Wait 1 second(s) until the next check [... identical status polls repeated roughly every 3 seconds from 01:07:04 to 01:08:54; tasks ba53f951-0e4e-4ff0-ae18-503108de15a2, 2cdbe5f9-6c71-4070-8724-33a94c0ada59, 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 and 0869386e-47c3-4ab9-995b-825bc4ac8adb remained in state STARTED throughout ...] 2026-01-07 01:08:57.314073 | orchestrator | 2026-01-07 01:08:57 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:08:57.316454 | orchestrator | 2026-01-07 01:08:57 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:08:57.321454 | orchestrator | 2026-01-07 01:08:57 | INFO  | Task 2cdbe5f9-6c71-4070-8724-33a94c0ada59 is in state SUCCESS 2026-01-07 01:08:57.325394 | orchestrator | 2026-01-07 01:08:57.325448 | orchestrator | 2026-01-07 01:08:57.325455 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:08:57.325462 | orchestrator | 2026-01-07 01:08:57.325468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:08:57.325473 | orchestrator | Wednesday 07 January 2026 01:06:30 +0000 (0:00:00.287) 0:00:00.287 
***** 2026-01-07 01:08:57.325479 | orchestrator | ok: [testbed-manager] 2026-01-07 01:08:57.325484 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:08:57.325487 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:08:57.325490 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:08:57.325494 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:08:57.325497 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:08:57.325500 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:08:57.325504 | orchestrator | 2026-01-07 01:08:57.325507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:08:57.325510 | orchestrator | Wednesday 07 January 2026 01:06:31 +0000 (0:00:00.803) 0:00:01.091 ***** 2026-01-07 01:08:57.325514 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325517 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325520 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325523 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325526 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325530 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325533 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-07 01:08:57.325536 | orchestrator | 2026-01-07 01:08:57.325539 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-07 01:08:57.325542 | orchestrator | 2026-01-07 01:08:57.325545 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-07 01:08:57.325548 | orchestrator | Wednesday 07 January 2026 01:06:31 +0000 (0:00:00.682) 0:00:01.774 ***** 2026-01-07 01:08:57.325553 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:08:57.325560 | orchestrator | 2026-01-07 01:08:57.325564 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-01-07 01:08:57.325569 | orchestrator | Wednesday 07 January 2026 01:06:33 +0000 (0:00:01.450) 0:00:03.224 ***** 2026-01-07 01:08:57.325574 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-01-07 01:08:57.325580 | orchestrator | 2026-01-07 01:08:57.325585 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-01-07 01:08:57.325607 | orchestrator | Wednesday 07 January 2026 01:06:37 +0000 (0:00:04.035) 0:00:07.260 ***** 2026-01-07 01:08:57.325612 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-01-07 01:08:57.325619 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-01-07 01:08:57.325624 | orchestrator | 2026-01-07 01:08:57.325629 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-01-07 01:08:57.325634 | orchestrator | Wednesday 07 January 2026 01:06:43 +0000 (0:00:06.339) 0:00:13.599 ***** 2026-01-07 01:08:57.325639 | orchestrator | ok: [testbed-manager] => (item=service) 2026-01-07 01:08:57.325644 | orchestrator | 2026-01-07 01:08:57.325650 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-01-07 01:08:57.325655 | orchestrator | Wednesday 07 January 2026 01:06:46 +0000 (0:00:02.572) 0:00:16.172 ***** 2026-01-07 01:08:57.325660 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:08:57.325663 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-01-07 01:08:57.325666 | 
orchestrator |
2026-01-07 01:08:57.325669 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-07 01:08:57.325672 | orchestrator | Wednesday 07 January 2026 01:06:49 +0000 (0:00:03.291) 0:00:19.463 *****
2026-01-07 01:08:57.325676 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-07 01:08:57.325679 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-07 01:08:57.325682 | orchestrator |
2026-01-07 01:08:57.325686 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-07 01:08:57.325691 | orchestrator | Wednesday 07 January 2026 01:06:55 +0000 (0:00:06.173) 0:00:25.637 *****
2026-01-07 01:08:57.325704 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-07 01:08:57.325710 | orchestrator |
2026-01-07 01:08:57.325715 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:08:57.325720 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325725 | orchestrator | testbed-node-0  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325731 | orchestrator | testbed-node-1  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325736 | orchestrator | testbed-node-2  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325741 | orchestrator | testbed-node-3  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325757 | orchestrator | testbed-node-4  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325761 | orchestrator | testbed-node-5  : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:08:57.325764 | orchestrator |
2026-01-07 01:08:57.325767 | orchestrator |
2026-01-07 01:08:57.325770 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:08:57.325773 | orchestrator | Wednesday 07 January 2026 01:07:00 +0000 (0:00:04.451) 0:00:30.089 *****
2026-01-07 01:08:57.325777 | orchestrator | ===============================================================================
2026-01-07 01:08:57.325780 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.34s
2026-01-07 01:08:57.325783 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.17s
2026-01-07 01:08:57.325786 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.45s
2026-01-07 01:08:57.325793 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.04s
2026-01-07 01:08:57.325796 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.29s
2026-01-07 01:08:57.325799 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.57s
2026-01-07 01:08:57.325802 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.45s
2026-01-07 01:08:57.325805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2026-01-07 01:08:57.325808 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-01-07 01:08:57.325811 | orchestrator |
2026-01-07 01:08:57.325814 | orchestrator |
2026-01-07 01:08:57.325817 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:08:57.325821 | orchestrator |
2026-01-07 01:08:57.325824 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:08:57.325827 | orchestrator | Wednesday 07 January 2026 01:05:43 +0000 (0:00:00.288) 0:00:00.288 *****
2026-01-07 01:08:57.325830 | orchestrator | ok: [testbed-manager]
2026-01-07 01:08:57.325833 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:08:57.325836 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:08:57.325841 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:08:57.325846 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:08:57.325850 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:08:57.325913 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:08:57.325920 | orchestrator |
2026-01-07 01:08:57.325925 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:08:57.325930 | orchestrator | Wednesday 07 January 2026 01:05:44 +0000 (0:00:00.925) 0:00:01.213 *****
2026-01-07 01:08:57.325936 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325940 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325943 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325946 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325950 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325953 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325956 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-07 01:08:57.325959 | orchestrator |
2026-01-07 01:08:57.325962 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-07 01:08:57.325965 | orchestrator |
2026-01-07 01:08:57.325968 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-07 01:08:57.325971 | orchestrator | Wednesday 07 January 2026 01:05:45 +0000 (0:00:00.723) 0:00:01.936 *****
2026-01-07 01:08:57.325974 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager,
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:08:57.325978 | orchestrator | 2026-01-07 01:08:57.325981 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-07 01:08:57.325984 | orchestrator | Wednesday 07 January 2026 01:05:46 +0000 (0:00:01.483) 0:00:03.419 ***** 2026-01-07 01:08:57.325994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326008 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:08:57.326041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326057 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326095 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326098 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326125 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:08:57.326135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326149 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326152 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326192 | orchestrator | 2026-01-07 01:08:57.326197 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-07 01:08:57.326203 | orchestrator | Wednesday 07 January 2026 01:05:49 +0000 (0:00:02.990) 0:00:06.410 ***** 2026-01-07 01:08:57.326212 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:08:57.326217 | orchestrator | 2026-01-07 01:08:57.326222 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-07 01:08:57.326225 | orchestrator | Wednesday 07 January 2026 01:05:51 +0000 (0:00:01.411) 0:00:07.821 ***** 2026-01-07 01:08:57.326231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:08:57.326247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 
01:08:57.326254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326267 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.326273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326286 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326295 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326491 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326494 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326516 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:08:57.326522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.326535 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.326552 | orchestrator | 2026-01-07 01:08:57.326555 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-07 01:08:57.326560 | orchestrator | Wednesday 07 January 2026 01:05:56 +0000 (0:00:05.349) 0:00:13.170 ***** 2026-01-07 01:08:57.326568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 01:08:57.326577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326588 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 01:08:57.326598 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-01-07 01:08:57.326610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326655 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326668 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.326674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-07 01:08:57.326709 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.326714 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.326720 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.326726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326745 | orchestrator | skipping: 
[testbed-node-3] 2026-01-07 01:08:57.326754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326770 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.326773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326783 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.326786 | orchestrator | 2026-01-07 01:08:57.326791 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-07 01:08:57.326794 | orchestrator | Wednesday 07 January 2026 01:05:58 +0000 (0:00:01.705) 0:00:14.876 ***** 2026-01-07 01:08:57.326797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-07 01:08:57.326804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326815 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-07 01:08:57.326818 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326844 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.326847 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.326850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.326854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.326865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.326868 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.327221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.327330 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.327334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.327337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-07 01:08:57.327344 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.327347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.327354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 
01:08:57.327365 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.327369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.327375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327388 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.327394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-07 01:08:57.327401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-07 01:08:57.327415 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.327420 | orchestrator | 2026-01-07 01:08:57.327425 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-07 01:08:57.327431 | orchestrator | Wednesday 07 January 2026 01:06:00 +0000 (0:00:02.280) 0:00:17.157 ***** 2026-01-07 01:08:57.327440 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:08:57.327450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327481 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.327494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327513 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:08:57.327555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327570 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.327843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 
01:08:57.327847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.327863 | orchestrator | 2026-01-07 01:08:57.327866 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-07 01:08:57.327870 | orchestrator | Wednesday 07 January 2026 01:06:06 +0000 (0:00:06.525) 0:00:23.682 ***** 2026-01-07 01:08:57.327873 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:08:57.327877 | orchestrator | 2026-01-07 01:08:57.327880 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-07 01:08:57.327883 | orchestrator | Wednesday 07 January 2026 01:06:08 +0000 (0:00:01.463) 0:00:25.145 ***** 2026-01-07 01:08:57.327897 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327902 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327905 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327908 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327911 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.327918 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327922 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327936 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321104, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2509453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.327943 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327946 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327952 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327957 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327967 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327971 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327974 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327977 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1321125, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2576847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327986 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.327991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328001 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328005 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328008 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328012 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328017 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328020 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328025 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328035 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328039 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328042 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328045 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328051 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1321096, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2496846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328059 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328062 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328072 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328083 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328086 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328096 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328100 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328104 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328108 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328119 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328122 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328126 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328131 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328135 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328140 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328144 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328156 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328159 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1321118, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2546847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328176 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328179 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328183 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328188 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328191 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328203 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328217 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328223 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328228 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328237 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328242 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328270 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328277 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328286 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328292 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328298 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328304 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328308 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328320 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328328 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328332 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328335 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328338 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328343 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328346 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328352 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True,
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328358 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321091, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2482562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328361 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328364 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328367 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328372 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328376 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.328379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328385 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328391 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328394 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-01-07 01:08:57.328397 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328400 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328405 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328408 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328418 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328421 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328425 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321105, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2516847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328428 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328432 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328436 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328444 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328448 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-07 01:08:57.328451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328454 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328457 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328466 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.328470 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328477 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328482 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328485 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328489 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328493 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328497 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328500 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328506 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328511 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.328515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328521 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321115, 'dev': 111, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-07 01:08:57.328528 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.328532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321107, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2526846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328536 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321099, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2506526, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328540 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321124, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2566848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328545 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321089, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2476845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321138, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328556 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321123, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2556846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328560 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321095, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2486846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328564 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321090, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2480159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-07 01:08:57.328568 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321113, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328572 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321110, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2533507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328580 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321135, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2598908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-07 01:08:57.328584 | orchestrator |
2026-01-07 01:08:57.328587 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-07 01:08:57.328591 | orchestrator | Wednesday 07 January 2026 01:06:33 +0000 (0:00:24.970) 0:00:50.116 *****
2026-01-07 01:08:57.328595 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 01:08:57.328599 | orchestrator |
2026-01-07 01:08:57.328603 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-07 01:08:57.328606 | orchestrator | Wednesday 07 January 2026 01:06:34 +0000 (0:00:00.695) 0:00:50.812 *****
2026-01-07 01:08:57.328610 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328629 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-07 01:08:57.328635 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328653 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:08:57.328657 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328675 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-07 01:08:57.328679 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328697 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328716 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-07 01:08:57.328736 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue:
2026-01-07 01:08:57.328747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-07 01:08:57.328751 | orchestrator
| node-4/prometheus.yml.d' is not a directory 2026-01-07 01:08:57.328755 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 01:08:57.328759 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-07 01:08:57.328762 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 01:08:57.328766 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 01:08:57.328770 | orchestrator | 2026-01-07 01:08:57.328774 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-07 01:08:57.328778 | orchestrator | Wednesday 07 January 2026 01:06:36 +0000 (0:00:01.971) 0:00:52.783 ***** 2026-01-07 01:08:57.328781 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328785 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.328789 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328793 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.328796 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328800 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.328804 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328808 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328811 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328815 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.328820 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-07 01:08:57.328824 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328828 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-07 01:08:57.328831 | orchestrator | 2026-01-07 01:08:57.328835 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-07 01:08:57.328839 | orchestrator | Wednesday 07 January 2026 01:06:50 +0000 (0:00:14.728) 0:01:07.512 ***** 2026-01-07 01:08:57.328843 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328846 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.328849 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328852 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328855 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328858 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.328861 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328864 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.328869 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328872 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328875 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-07 01:08:57.328878 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.328881 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-07 01:08:57.328884 | orchestrator | 2026-01-07 01:08:57.328887 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-07 01:08:57.328893 | orchestrator | Wednesday 07 January 2026 01:06:54 
+0000 (0:00:03.436) 0:01:10.948 ***** 2026-01-07 01:08:57.328896 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328900 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328903 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328906 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.328909 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.328912 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.328915 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-07 01:08:57.328918 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328921 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328924 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328928 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.328931 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-07 01:08:57.328934 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328937 | orchestrator | 2026-01-07 01:08:57.328940 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-07 01:08:57.328943 | orchestrator | Wednesday 07 January 2026 01:06:55 +0000 (0:00:01.764) 0:01:12.712 ***** 2026-01-07 01:08:57.328946 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-01-07 01:08:57.328949 | orchestrator | 2026-01-07 01:08:57.328953 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-07 01:08:57.328956 | orchestrator | Wednesday 07 January 2026 01:06:56 +0000 (0:00:00.688) 0:01:13.400 ***** 2026-01-07 01:08:57.328959 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.328962 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.328965 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.328968 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.328971 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328974 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.328977 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328980 | orchestrator | 2026-01-07 01:08:57.328983 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-07 01:08:57.328986 | orchestrator | Wednesday 07 January 2026 01:06:57 +0000 (0:00:00.684) 0:01:14.085 ***** 2026-01-07 01:08:57.328989 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.328992 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.328995 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.328998 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.329001 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329004 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329007 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:57.329010 | orchestrator | 2026-01-07 01:08:57.329014 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-07 01:08:57.329017 | orchestrator | Wednesday 07 January 2026 01:06:59 +0000 (0:00:02.396) 0:01:16.481 ***** 2026-01-07 01:08:57.329020 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329023 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.329028 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329033 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329036 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.329039 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.329042 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329045 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.329049 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329052 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.329055 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329058 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.329061 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-07 01:08:57.329064 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.329067 | orchestrator | 2026-01-07 01:08:57.329070 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-07 01:08:57.329073 | orchestrator | Wednesday 07 January 2026 01:07:01 +0000 (0:00:02.060) 0:01:18.541 ***** 2026-01-07 01:08:57.329077 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329081 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.329084 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329088 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.329093 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329098 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.329103 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329108 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.329113 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329118 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.329124 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-07 01:08:57.329128 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.329131 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-07 01:08:57.329134 | orchestrator | 2026-01-07 01:08:57.329137 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-07 01:08:57.329140 | orchestrator | Wednesday 07 January 2026 01:07:03 +0000 (0:00:01.861) 0:01:20.402 ***** 2026-01-07 01:08:57.329143 | orchestrator | [WARNING]: Skipped 2026-01-07 01:08:57.329146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-07 01:08:57.329149 | orchestrator | due to this access issue: 2026-01-07 01:08:57.329152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-07 01:08:57.329155 | orchestrator | not a directory 2026-01-07 01:08:57.329158 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-01-07 01:08:57.329171 | orchestrator | 2026-01-07 01:08:57.329174 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-07 01:08:57.329177 | orchestrator | Wednesday 07 January 2026 01:07:04 +0000 (0:00:01.205) 0:01:21.608 ***** 2026-01-07 01:08:57.329181 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.329184 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.329187 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.329190 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.329193 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.329198 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.329202 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.329207 | orchestrator | 2026-01-07 01:08:57.329212 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-07 01:08:57.329216 | orchestrator | Wednesday 07 January 2026 01:07:05 +0000 (0:00:01.031) 0:01:22.639 ***** 2026-01-07 01:08:57.329221 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.329226 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:08:57.329231 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:08:57.329236 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:08:57.329240 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:08:57.329246 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:08:57.329250 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:08:57.329256 | orchestrator | 2026-01-07 01:08:57.329261 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-07 01:08:57.329266 | orchestrator | Wednesday 07 January 2026 01:07:06 +0000 (0:00:00.940) 0:01:23.580 ***** 2026-01-07 01:08:57.329272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-07 01:08:57.329292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329313 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-07 01:08:57.329318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329329 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329353 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-07 01:08:57.329357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329369 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-07 01:08:57.329395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329398 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-07 01:08:57.329404 | orchestrator | 2026-01-07 01:08:57.329408 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-07 01:08:57.329411 | orchestrator | Wednesday 07 January 2026 01:07:11 +0000 (0:00:04.938) 0:01:28.519 ***** 2026-01-07 01:08:57.329414 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-07 01:08:57.329417 | orchestrator | skipping: [testbed-manager] 2026-01-07 01:08:57.329420 | orchestrator | 2026-01-07 01:08:57.329423 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329426 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:01.239) 0:01:29.758 ***** 2026-01-07 01:08:57.329430 | orchestrator | 2026-01-07 01:08:57.329433 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 
01:08:57.329436 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.066) 0:01:29.825 ***** 2026-01-07 01:08:57.329439 | orchestrator | 2026-01-07 01:08:57.329442 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329445 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.076) 0:01:29.901 ***** 2026-01-07 01:08:57.329448 | orchestrator | 2026-01-07 01:08:57.329451 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329456 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.072) 0:01:29.973 ***** 2026-01-07 01:08:57.329459 | orchestrator | 2026-01-07 01:08:57.329462 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329465 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.240) 0:01:30.214 ***** 2026-01-07 01:08:57.329468 | orchestrator | 2026-01-07 01:08:57.329471 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329475 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.063) 0:01:30.278 ***** 2026-01-07 01:08:57.329478 | orchestrator | 2026-01-07 01:08:57.329481 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-07 01:08:57.329484 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.061) 0:01:30.340 ***** 2026-01-07 01:08:57.329487 | orchestrator | 2026-01-07 01:08:57.329490 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-07 01:08:57.329493 | orchestrator | Wednesday 07 January 2026 01:07:13 +0000 (0:00:00.084) 0:01:30.425 ***** 2026-01-07 01:08:57.329502 | orchestrator | changed: [testbed-manager] 2026-01-07 01:08:57.329507 | orchestrator | 2026-01-07 01:08:57.329512 | orchestrator | RUNNING HANDLER 
[prometheus : Restart prometheus-node-exporter container] ****** 2026-01-07 01:08:57.329517 | orchestrator | Wednesday 07 January 2026 01:07:32 +0000 (0:00:18.865) 0:01:49.290 ***** 2026-01-07 01:08:57.329524 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:57.329529 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:08:57.329534 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329538 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:08:57.329544 | orchestrator | changed: [testbed-manager] 2026-01-07 01:08:57.329549 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:08:57.329554 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329560 | orchestrator | 2026-01-07 01:08:57.329564 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-07 01:08:57.329567 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:15.091) 0:02:04.382 ***** 2026-01-07 01:08:57.329571 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329574 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:57.329577 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329580 | orchestrator | 2026-01-07 01:08:57.329583 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-07 01:08:57.329586 | orchestrator | Wednesday 07 January 2026 01:07:53 +0000 (0:00:05.646) 0:02:10.029 ***** 2026-01-07 01:08:57.329590 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:57.329593 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329596 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329599 | orchestrator | 2026-01-07 01:08:57.329602 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-07 01:08:57.329605 | orchestrator | Wednesday 07 January 2026 01:07:58 +0000 (0:00:05.355) 0:02:15.384 ***** 2026-01-07 01:08:57.329609 | orchestrator | 
changed: [testbed-node-0] 2026-01-07 01:08:57.329612 | orchestrator | changed: [testbed-manager] 2026-01-07 01:08:57.329615 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:08:57.329618 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329621 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:08:57.329624 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329627 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:08:57.329630 | orchestrator | 2026-01-07 01:08:57.329633 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-07 01:08:57.329636 | orchestrator | Wednesday 07 January 2026 01:08:13 +0000 (0:00:14.741) 0:02:30.126 ***** 2026-01-07 01:08:57.329639 | orchestrator | changed: [testbed-manager] 2026-01-07 01:08:57.329642 | orchestrator | 2026-01-07 01:08:57.329647 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-07 01:08:57.329652 | orchestrator | Wednesday 07 January 2026 01:08:24 +0000 (0:00:11.160) 0:02:41.286 ***** 2026-01-07 01:08:57.329660 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:08:57.329666 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:08:57.329671 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:08:57.329675 | orchestrator | 2026-01-07 01:08:57.329680 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-07 01:08:57.329685 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:00:09.634) 0:02:50.921 ***** 2026-01-07 01:08:57.329690 | orchestrator | changed: [testbed-manager] 2026-01-07 01:08:57.329694 | orchestrator | 2026-01-07 01:08:57.329698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-07 01:08:57.329703 | orchestrator | Wednesday 07 January 2026 01:08:44 +0000 (0:00:10.267) 0:03:01.188 ***** 2026-01-07 01:08:57.329708 | orchestrator | 
changed: [testbed-node-4] 2026-01-07 01:08:57.329713 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:08:57.329718 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:08:57.329724 | orchestrator | 2026-01-07 01:08:57.329730 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:08:57.329740 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-07 01:08:57.329747 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:08:57.329755 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:08:57.329760 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:08:57.329765 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:08:57.329773 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:08:57.329778 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:08:57.329783 | orchestrator | 2026-01-07 01:08:57.329787 | orchestrator | 2026-01-07 01:08:57.329792 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:08:57.329796 | orchestrator | Wednesday 07 January 2026 01:08:54 +0000 (0:00:09.604) 0:03:10.792 ***** 2026-01-07 01:08:57.329801 | orchestrator | =============================================================================== 2026-01-07 01:08:57.329805 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.97s 2026-01-07 01:08:57.329810 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.87s 2026-01-07 
01:08:57.329814 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.09s 2026-01-07 01:08:57.329819 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.74s 2026-01-07 01:08:57.329823 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.73s 2026-01-07 01:08:57.329832 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.16s 2026-01-07 01:08:57.329837 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.27s 2026-01-07 01:08:57.329842 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.63s 2026-01-07 01:08:57.329846 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.60s 2026-01-07 01:08:57.329851 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.53s 2026-01-07 01:08:57.329856 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.65s 2026-01-07 01:08:57.329873 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.36s 2026-01-07 01:08:57.329879 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.35s 2026-01-07 01:08:57.329884 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.94s 2026-01-07 01:08:57.329889 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.44s 2026-01-07 01:08:57.329894 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.99s 2026-01-07 01:08:57.329897 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.40s 2026-01-07 01:08:57.329900 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.28s 2026-01-07 
01:08:57.329903 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.06s 2026-01-07 01:08:57.329907 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.97s 2026-01-07 01:08:57.330408 | orchestrator | 2026-01-07 01:08:57 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:08:57.332508 | orchestrator | 2026-01-07 01:08:57 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:08:57.332533 | orchestrator | 2026-01-07 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:00.384484 | orchestrator | 2026-01-07 01:09:00 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state STARTED 2026-01-07 01:09:00.385756 | orchestrator | 2026-01-07 01:09:00 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:00.387671 | orchestrator | 2026-01-07 01:09:00 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:00.388803 | orchestrator | 2026-01-07 01:09:00 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:00.388849 | orchestrator | 2026-01-07 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:03.443476 | orchestrator | 2026-01-07 01:09:03 | INFO  | Task ba53f951-0e4e-4ff0-ae18-503108de15a2 is in state SUCCESS 2026-01-07 01:09:03.446380 | orchestrator | 2026-01-07 01:09:03 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:03.447718 | orchestrator | 2026-01-07 01:09:03 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:03.449537 | orchestrator | 2026-01-07 01:09:03 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:03.449570 | orchestrator | 2026-01-07 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:06.504150 | orchestrator | 2026-01-07 01:09:06 | 
INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:06.506578 | orchestrator | 2026-01-07 01:09:06 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:06.509273 | orchestrator | 2026-01-07 01:09:06 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:06.511868 | orchestrator | 2026-01-07 01:09:06 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:06.512593 | orchestrator | 2026-01-07 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:09.554447 | orchestrator | 2026-01-07 01:09:09 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:09.554719 | orchestrator | 2026-01-07 01:09:09 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:09.555475 | orchestrator | 2026-01-07 01:09:09 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:09.557407 | orchestrator | 2026-01-07 01:09:09 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:09.557441 | orchestrator | 2026-01-07 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:12.620304 | orchestrator | 2026-01-07 01:09:12 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:12.622374 | orchestrator | 2026-01-07 01:09:12 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:12.623819 | orchestrator | 2026-01-07 01:09:12 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:12.625741 | orchestrator | 2026-01-07 01:09:12 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:12.625829 | orchestrator | 2026-01-07 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:15.670640 | orchestrator | 2026-01-07 01:09:15 | INFO  | Task 
603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:15.684169 | orchestrator | 2026-01-07 01:09:15 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:15.685614 | orchestrator | 2026-01-07 01:09:15 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:15.687212 | orchestrator | 2026-01-07 01:09:15 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:15.687357 | orchestrator | 2026-01-07 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:18.734576 | orchestrator | 2026-01-07 01:09:18 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:18.736255 | orchestrator | 2026-01-07 01:09:18 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:18.737761 | orchestrator | 2026-01-07 01:09:18 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:18.738647 | orchestrator | 2026-01-07 01:09:18 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:18.740984 | orchestrator | 2026-01-07 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:21.782756 | orchestrator | 2026-01-07 01:09:21 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:21.783022 | orchestrator | 2026-01-07 01:09:21 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:21.783944 | orchestrator | 2026-01-07 01:09:21 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:21.786807 | orchestrator | 2026-01-07 01:09:21 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:21.786856 | orchestrator | 2026-01-07 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:24.821839 | orchestrator | 2026-01-07 01:09:24 | INFO  | Task 
603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:24.822389 | orchestrator | 2026-01-07 01:09:24 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:24.823353 | orchestrator | 2026-01-07 01:09:24 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:24.824482 | orchestrator | 2026-01-07 01:09:24 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:24.824503 | orchestrator | 2026-01-07 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:27.863704 | orchestrator | 2026-01-07 01:09:27 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:27.863752 | orchestrator | 2026-01-07 01:09:27 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:27.864710 | orchestrator | 2026-01-07 01:09:27 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:27.867131 | orchestrator | 2026-01-07 01:09:27 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:27.867173 | orchestrator | 2026-01-07 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:30.939687 | orchestrator | 2026-01-07 01:09:30 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:30.940455 | orchestrator | 2026-01-07 01:09:30 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:30.941620 | orchestrator | 2026-01-07 01:09:30 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:30.942947 | orchestrator | 2026-01-07 01:09:30 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:30.943384 | orchestrator | 2026-01-07 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:33.992833 | orchestrator | 2026-01-07 01:09:33 | INFO  | Task 
603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:33.995322 | orchestrator | 2026-01-07 01:09:33 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:33.997403 | orchestrator | 2026-01-07 01:09:34 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:34.010863 | orchestrator | 2026-01-07 01:09:34 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:34.010972 | orchestrator | 2026-01-07 01:09:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:37.050899 | orchestrator | 2026-01-07 01:09:37 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:37.052610 | orchestrator | 2026-01-07 01:09:37 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:37.054514 | orchestrator | 2026-01-07 01:09:37 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:37.056421 | orchestrator | 2026-01-07 01:09:37 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:37.056493 | orchestrator | 2026-01-07 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:40.102237 | orchestrator | 2026-01-07 01:09:40 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:40.103871 | orchestrator | 2026-01-07 01:09:40 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:40.106963 | orchestrator | 2026-01-07 01:09:40 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:40.109981 | orchestrator | 2026-01-07 01:09:40 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:40.110245 | orchestrator | 2026-01-07 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:43.152085 | orchestrator | 2026-01-07 01:09:43 | INFO  | Task 
603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:43.153835 | orchestrator | 2026-01-07 01:09:43 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:43.155491 | orchestrator | 2026-01-07 01:09:43 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state STARTED 2026-01-07 01:09:43.156599 | orchestrator | 2026-01-07 01:09:43 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:43.156646 | orchestrator | 2026-01-07 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:46.211650 | orchestrator | 2026-01-07 01:09:46 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:46.213237 | orchestrator | 2026-01-07 01:09:46 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:46.216051 | orchestrator | 2026-01-07 01:09:46 | INFO  | Task 0c3377c6-bfc2-486c-bd41-0294bfcdc5b2 is in state SUCCESS 2026-01-07 01:09:46.218179 | orchestrator | 2026-01-07 01:09:46.218223 | orchestrator | 2026-01-07 01:09:46.218229 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-07 01:09:46.218233 | orchestrator | 2026-01-07 01:09:46.218237 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-07 01:09:46.218242 | orchestrator | Wednesday 07 January 2026 01:02:22 +0000 (0:00:00.112) 0:00:00.112 ***** 2026-01-07 01:09:46.218245 | orchestrator | changed: [localhost] 2026-01-07 01:09:46.218250 | orchestrator | 2026-01-07 01:09:46.218254 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-07 01:09:46.218258 | orchestrator | Wednesday 07 January 2026 01:02:23 +0000 (0:00:01.237) 0:00:01.350 ***** 2026-01-07 01:09:46.218262 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
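The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client polling asynchronous task states until each reaches SUCCESS. A minimal sketch of that polling pattern (illustrative only — `wait_for_tasks` and the `get_state` callback are assumed names, not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=600.0):
    """Poll task states until every task reports SUCCESS.

    get_state(task_id) -> str is a caller-supplied lookup (an assumption
    for this sketch); a real client would query the task backend instead.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # sorted() copies the set, so discarding during iteration is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"INFO  | Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

Each polling cycle re-queries every still-pending task, which matches the log: all four task UUIDs are printed once per cycle, and a task drops out of the output after its first SUCCESS.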
2026-01-07 01:09:46.218280 | orchestrator | 2026-01-07 01:09:46.218284 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218288 | orchestrator | 2026-01-07 01:09:46.218295 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218300 | orchestrator | 2026-01-07 01:09:46.218310 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218317 | orchestrator | 2026-01-07 01:09:46.218323 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218328 | orchestrator | 2026-01-07 01:09:46.218367 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218373 | orchestrator | 2026-01-07 01:09:46.218380 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218385 | orchestrator | 2026-01-07 01:09:46.218389 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218393 | orchestrator | 2026-01-07 01:09:46.218397 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-07 01:09:46.218422 | orchestrator | changed: [localhost] 2026-01-07 01:09:46.218427 | orchestrator | 2026-01-07 01:09:46.218431 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-07 01:09:46.218470 | orchestrator | Wednesday 07 January 2026 01:08:34 +0000 (0:06:11.065) 0:06:12.415 ***** 2026-01-07 01:09:46.218475 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
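The "FAILED - RETRYING: … (3 retries left)." messages above are produced by Ansible's `retries`/`until`/`delay` keywords on the download tasks: the task is re-run after each failure while a countdown of remaining retries is printed. The countdown behavior can be sketched generically in Python (an illustration of the pattern, not Ansible's implementation; `label` and the callable `task` are placeholders):

```python
import time

def run_with_retries(task, retries=3, delay=5.0, label="task"):
    """Re-run `task` until it succeeds, announcing remaining attempts
    the way Ansible prints 'FAILED - RETRYING: ... (N retries left).'"""
    # one initial try plus `retries` retries: remaining counts 3, 2, 1, 0
    for remaining in range(retries, -1, -1):
        try:
            return task()
        except Exception:
            if remaining == 0:
                raise  # retries exhausted; surface the final failure
            print(f"FAILED - RETRYING: {label} ({remaining} retries left).")
            time.sleep(delay)
```

In the log both downloads fail once, print "(3 retries left)", and then succeed on the second attempt, so the task still ends as `changed` rather than `failed`.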
2026-01-07 01:09:46.218479 | orchestrator | changed: [localhost] 2026-01-07 01:09:46.218482 | orchestrator | 2026-01-07 01:09:46.218486 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:09:46.218490 | orchestrator | 2026-01-07 01:09:46.218494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:09:46.218498 | orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:26.275) 0:06:38.691 ***** 2026-01-07 01:09:46.218501 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:09:46.218505 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:09:46.218509 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:09:46.218513 | orchestrator | 2026-01-07 01:09:46.218516 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:09:46.218520 | orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:00.328) 0:06:39.019 ***** 2026-01-07 01:09:46.218524 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-07 01:09:46.218528 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-07 01:09:46.218532 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-07 01:09:46.218536 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-07 01:09:46.218539 | orchestrator | 2026-01-07 01:09:46.218543 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-07 01:09:46.218549 | orchestrator | skipping: no hosts matched 2026-01-07 01:09:46.218556 | orchestrator | 2026-01-07 01:09:46.218562 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:09:46.218568 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:09:46.218577 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:09:46.218584 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:09:46.218591 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:09:46.218597 | orchestrator | 2026-01-07 01:09:46.218603 | orchestrator | 2026-01-07 01:09:46.218608 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:09:46.218710 | orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:00.613) 0:06:39.632 ***** 2026-01-07 01:09:46.218717 | orchestrator | =============================================================================== 2026-01-07 01:09:46.218720 | orchestrator | Download ironic-agent initramfs --------------------------------------- 371.07s 2026-01-07 01:09:46.218724 | orchestrator | Download ironic-agent kernel ------------------------------------------- 26.28s 2026-01-07 01:09:46.218728 | orchestrator | Ensure the destination directory exists --------------------------------- 1.24s 2026-01-07 01:09:46.218732 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-01-07 01:09:46.218736 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-01-07 01:09:46.218740 | orchestrator | 2026-01-07 01:09:46.218743 | orchestrator | 2026-01-07 01:09:46.218747 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:09:46.218751 | orchestrator | 2026-01-07 01:09:46.218754 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:09:46.218758 | orchestrator | Wednesday 07 January 2026 01:07:03 +0000 (0:00:00.263) 0:00:00.263 ***** 2026-01-07 01:09:46.218762 | orchestrator | ok: [testbed-node-0] 2026-01-07 
01:09:46.218765 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:09:46.218794 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:09:46.218800 | orchestrator | 2026-01-07 01:09:46.218812 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:09:46.218819 | orchestrator | Wednesday 07 January 2026 01:07:04 +0000 (0:00:00.363) 0:00:00.627 ***** 2026-01-07 01:09:46.218825 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-07 01:09:46.218832 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-07 01:09:46.218838 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-07 01:09:46.218845 | orchestrator | 2026-01-07 01:09:46.218851 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-07 01:09:46.218859 | orchestrator | 2026-01-07 01:09:46.218865 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:09:46.218871 | orchestrator | Wednesday 07 January 2026 01:07:04 +0000 (0:00:00.511) 0:00:01.139 ***** 2026-01-07 01:09:46.218875 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:09:46.218878 | orchestrator | 2026-01-07 01:09:46.218882 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-07 01:09:46.218888 | orchestrator | Wednesday 07 January 2026 01:07:05 +0000 (0:00:00.566) 0:00:01.706 ***** 2026-01-07 01:09:46.218899 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-07 01:09:46.218905 | orchestrator | 2026-01-07 01:09:46.218911 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-01-07 01:09:46.218918 | orchestrator | Wednesday 07 January 2026 01:07:09 +0000 (0:00:04.405) 0:00:06.111 ***** 2026-01-07 01:09:46.218923 | orchestrator | 
changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-07 01:09:46.218929 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-01-07 01:09:46.218935 | orchestrator | 2026-01-07 01:09:46.218942 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-07 01:09:46.218948 | orchestrator | Wednesday 07 January 2026 01:07:16 +0000 (0:00:06.802) 0:00:12.914 ***** 2026-01-07 01:09:46.218955 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:09:46.218962 | orchestrator | 2026-01-07 01:09:46.218968 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-01-07 01:09:46.218974 | orchestrator | Wednesday 07 January 2026 01:07:19 +0000 (0:00:02.718) 0:00:15.632 ***** 2026-01-07 01:09:46.218978 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:09:46.218982 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-07 01:09:46.218986 | orchestrator | 2026-01-07 01:09:46.218990 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-07 01:09:46.218999 | orchestrator | Wednesday 07 January 2026 01:07:22 +0000 (0:00:03.122) 0:00:18.755 ***** 2026-01-07 01:09:46.219003 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:09:46.219006 | orchestrator | 2026-01-07 01:09:46.219010 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-07 01:09:46.219014 | orchestrator | Wednesday 07 January 2026 01:07:25 +0000 (0:00:03.300) 0:00:22.055 ***** 2026-01-07 01:09:46.219018 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-07 01:09:46.219021 | orchestrator | 2026-01-07 01:09:46.219025 | orchestrator | TASK [glance : Ensuring config directories exist] 
****************************** 2026-01-07 01:09:46.219029 | orchestrator | Wednesday 07 January 2026 01:07:28 +0000 (0:00:03.220) 0:00:25.275 ***** 2026-01-07 01:09:46.219035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219064 | orchestrator | 2026-01-07 01:09:46.219068 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:09:46.219072 | orchestrator | Wednesday 07 January 2026 01:07:32 +0000 (0:00:03.602) 0:00:28.878 ***** 2026-01-07 01:09:46.219076 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:09:46.219088 | orchestrator | 2026-01-07 01:09:46.219096 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 
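Each per-node service dump above carries the same `custom_member_list`: one `server ... check inter 2000 rise 2 fall 5` line per backend node. A sketch that reproduces those lines from the node names and IPs seen in the log (the helper name is hypothetical):

```python
def haproxy_members(nodes, port=9292):
    # Render HAProxy backend "server" lines matching the
    # custom_member_list entries dumped in the log above.
    return [
        f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
        for name, ip in nodes
    ]

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = haproxy_members(nodes)
```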
2026-01-07 01:09:46.219100 | orchestrator | Wednesday 07 January 2026 01:07:34 +0000 (0:00:01.838) 0:00:30.716 ***** 2026-01-07 01:09:46.219104 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:46.219108 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219111 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:46.219115 | orchestrator | 2026-01-07 01:09:46.219119 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-07 01:09:46.219123 | orchestrator | Wednesday 07 January 2026 01:07:40 +0000 (0:00:06.543) 0:00:37.259 ***** 2026-01-07 01:09:46.219129 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219137 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219141 | orchestrator | 2026-01-07 01:09:46.219145 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-07 01:09:46.219148 | orchestrator | Wednesday 07 January 2026 01:07:42 +0000 (0:00:01.468) 0:00:38.728 ***** 2026-01-07 01:09:46.219152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219160 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-07 01:09:46.219166 | orchestrator | 2026-01-07 01:09:46.219172 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-07 01:09:46.219176 | orchestrator | 
Wednesday 07 January 2026 01:07:43 +0000 (0:00:01.147) 0:00:39.876 ***** 2026-01-07 01:09:46.219180 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:09:46.219184 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:09:46.219188 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:09:46.219191 | orchestrator | 2026-01-07 01:09:46.219195 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-07 01:09:46.219199 | orchestrator | Wednesday 07 January 2026 01:07:44 +0000 (0:00:00.632) 0:00:40.509 ***** 2026-01-07 01:09:46.219202 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219206 | orchestrator | 2026-01-07 01:09:46.219210 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-07 01:09:46.219214 | orchestrator | Wednesday 07 January 2026 01:07:44 +0000 (0:00:00.211) 0:00:40.720 ***** 2026-01-07 01:09:46.219217 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219221 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219225 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219228 | orchestrator | 2026-01-07 01:09:46.219232 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:09:46.219236 | orchestrator | Wednesday 07 January 2026 01:07:44 +0000 (0:00:00.259) 0:00:40.980 ***** 2026-01-07 01:09:46.219240 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:09:46.219243 | orchestrator | 2026-01-07 01:09:46.219247 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-07 01:09:46.219251 | orchestrator | Wednesday 07 January 2026 01:07:45 +0000 (0:00:00.532) 0:00:41.513 ***** 2026-01-07 01:09:46.219255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219277 | orchestrator | 2026-01-07 01:09:46.219281 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-07 01:09:46.219285 | orchestrator | Wednesday 07 January 2026 01:07:50 +0000 (0:00:05.538) 0:00:47.051 ***** 2026-01-07 01:09:46.219292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219302 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219312 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219326 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219329 | orchestrator | 2026-01-07 01:09:46.219333 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-07 01:09:46.219337 | orchestrator | Wednesday 07 January 2026 01:07:54 +0000 (0:00:03.478) 0:00:50.530 ***** 2026-01-07 01:09:46.219343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219347 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219358 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-07 01:09:46.219372 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219375 | orchestrator | 2026-01-07 01:09:46.219379 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-07 01:09:46.219383 | orchestrator | Wednesday 07 January 2026 01:07:58 +0000 (0:00:03.800) 0:00:54.330 ***** 2026-01-07 01:09:46.219387 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219391 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219394 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219398 | orchestrator | 2026-01-07 01:09:46.219418 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-07 01:09:46.219423 | orchestrator | Wednesday 07 January 2026 01:08:04 +0000 (0:00:06.259) 0:01:00.589 ***** 2026-01-07 01:09:46.219427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219450 | orchestrator | 2026-01-07 01:09:46.219455 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-07 01:09:46.219460 | orchestrator | Wednesday 07 January 2026 01:08:09 +0000 (0:00:05.607) 0:01:06.197 ***** 2026-01-07 01:09:46.219464 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:46.219472 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219479 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:46.219484 | orchestrator | 2026-01-07 01:09:46.219489 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-07 01:09:46.219493 | orchestrator | Wednesday 07 January 2026 01:08:16 +0000 (0:00:06.386) 0:01:12.583 ***** 2026-01-07 01:09:46.219497 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219502 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219506 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219510 | orchestrator | 2026-01-07 01:09:46.219515 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-07 01:09:46.219519 | orchestrator | Wednesday 07 January 2026 01:08:20 +0000 (0:00:03.766) 0:01:16.350 ***** 2026-01-07 01:09:46.219523 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219528 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219534 | orchestrator | skipping: [testbed-node-0] 2026-01-07 
01:09:46.219540 | orchestrator | 2026-01-07 01:09:46.219546 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-07 01:09:46.219555 | orchestrator | Wednesday 07 January 2026 01:08:23 +0000 (0:00:03.922) 0:01:20.272 ***** 2026-01-07 01:09:46.219565 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219571 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219576 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219582 | orchestrator | 2026-01-07 01:09:46.219592 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-07 01:09:46.219598 | orchestrator | Wednesday 07 January 2026 01:08:27 +0000 (0:00:03.814) 0:01:24.087 ***** 2026-01-07 01:09:46.219605 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219611 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219616 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219622 | orchestrator | 2026-01-07 01:09:46.219628 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-07 01:09:46.219635 | orchestrator | Wednesday 07 January 2026 01:08:30 +0000 (0:00:02.767) 0:01:26.854 ***** 2026-01-07 01:09:46.219641 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219647 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219653 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219659 | orchestrator | 2026-01-07 01:09:46.219665 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-07 01:09:46.219671 | orchestrator | Wednesday 07 January 2026 01:08:30 +0000 (0:00:00.281) 0:01:27.136 ***** 2026-01-07 01:09:46.219678 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:09:46.219684 | orchestrator | skipping: [testbed-node-1] 2026-01-07 
01:09:46.219693 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:09:46.219701 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219707 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-07 01:09:46.219714 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219721 | orchestrator | 2026-01-07 01:09:46.219727 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-07 01:09:46.219733 | orchestrator | Wednesday 07 January 2026 01:08:33 +0000 (0:00:02.965) 0:01:30.101 ***** 2026-01-07 01:09:46.219737 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:46.219742 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:46.219747 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219751 | orchestrator | 2026-01-07 01:09:46.219756 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-07 01:09:46.219760 | orchestrator | Wednesday 07 January 2026 01:08:38 +0000 (0:00:04.332) 0:01:34.433 ***** 2026-01-07 01:09:46.219766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-07 01:09:46.219794 | orchestrator | 2026-01-07 01:09:46.219799 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-07 01:09:46.219804 | orchestrator | Wednesday 07 January 2026 01:08:41 +0000 (0:00:03.087) 0:01:37.521 ***** 2026-01-07 01:09:46.219808 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:46.219815 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:46.219822 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:46.219832 | orchestrator | 2026-01-07 01:09:46.219838 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-07 01:09:46.219844 | orchestrator | Wednesday 07 January 2026 01:08:41 +0000 (0:00:00.272) 0:01:37.793 ***** 2026-01-07 01:09:46.219849 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219856 | orchestrator | 2026-01-07 01:09:46.219862 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-07 01:09:46.219869 | orchestrator | Wednesday 07 January 2026 01:08:43 +0000 (0:00:01.947) 0:01:39.741 ***** 2026-01-07 01:09:46.219875 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219881 | orchestrator | 2026-01-07 01:09:46.219885 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-07 01:09:46.219888 | orchestrator | Wednesday 07 January 2026 01:08:45 +0000 (0:00:02.082) 0:01:41.823 ***** 2026-01-07 01:09:46.219892 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219896 | orchestrator | 
2026-01-07 01:09:46.219900 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-07 01:09:46.219903 | orchestrator | Wednesday 07 January 2026 01:08:47 +0000 (0:00:01.844) 0:01:43.668 ***** 2026-01-07 01:09:46.219907 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219911 | orchestrator | 2026-01-07 01:09:46.219915 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-07 01:09:46.219918 | orchestrator | Wednesday 07 January 2026 01:09:13 +0000 (0:00:25.856) 0:02:09.525 ***** 2026-01-07 01:09:46.219922 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219926 | orchestrator | 2026-01-07 01:09:46.219929 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:09:46.219933 | orchestrator | Wednesday 07 January 2026 01:09:15 +0000 (0:00:02.581) 0:02:12.106 ***** 2026-01-07 01:09:46.219937 | orchestrator | 2026-01-07 01:09:46.219941 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:09:46.219947 | orchestrator | Wednesday 07 January 2026 01:09:16 +0000 (0:00:00.359) 0:02:12.466 ***** 2026-01-07 01:09:46.219951 | orchestrator | 2026-01-07 01:09:46.219955 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-07 01:09:46.219959 | orchestrator | Wednesday 07 January 2026 01:09:16 +0000 (0:00:00.068) 0:02:12.534 ***** 2026-01-07 01:09:46.219963 | orchestrator | 2026-01-07 01:09:46.219967 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-07 01:09:46.219971 | orchestrator | Wednesday 07 January 2026 01:09:16 +0000 (0:00:00.070) 0:02:12.604 ***** 2026-01-07 01:09:46.219974 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:46.219978 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:46.219986 | orchestrator | 
changed: [testbed-node-1] 2026-01-07 01:09:46.219990 | orchestrator | 2026-01-07 01:09:46.219994 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:09:46.219998 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-07 01:09:46.220002 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:09:46.220008 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-07 01:09:46.220012 | orchestrator | 2026-01-07 01:09:46.220016 | orchestrator | 2026-01-07 01:09:46.220020 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:09:46.220024 | orchestrator | Wednesday 07 January 2026 01:09:45 +0000 (0:00:28.715) 0:02:41.320 ***** 2026-01-07 01:09:46.220027 | orchestrator | =============================================================================== 2026-01-07 01:09:46.220031 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.72s 2026-01-07 01:09:46.220035 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.86s 2026-01-07 01:09:46.220041 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.80s 2026-01-07 01:09:46.220047 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.54s 2026-01-07 01:09:46.220056 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.39s 2026-01-07 01:09:46.220062 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.26s 2026-01-07 01:09:46.220068 | orchestrator | glance : Copying over config.json files for services -------------------- 5.61s 2026-01-07 01:09:46.220075 | orchestrator | service-cert-copy : glance | 
Copying over extra CA certificates --------- 5.54s 2026-01-07 01:09:46.220081 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.41s 2026-01-07 01:09:46.220088 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.33s 2026-01-07 01:09:46.220095 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.92s 2026-01-07 01:09:46.220101 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.81s 2026-01-07 01:09:46.220107 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.80s 2026-01-07 01:09:46.220111 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.77s 2026-01-07 01:09:46.220114 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.60s 2026-01-07 01:09:46.220118 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.48s 2026-01-07 01:09:46.220122 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.30s 2026-01-07 01:09:46.220126 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.22s 2026-01-07 01:09:46.220129 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.12s 2026-01-07 01:09:46.220133 | orchestrator | glance : Check glance containers ---------------------------------------- 3.09s 2026-01-07 01:09:46.220137 | orchestrator | 2026-01-07 01:09:46 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state STARTED 2026-01-07 01:09:46.220141 | orchestrator | 2026-01-07 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:49.274719 | orchestrator | 2026-01-07 01:09:49 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:09:49.276138 | orchestrator | 2026-01-07 01:09:49 | INFO  
| Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:49.278739 | orchestrator | 2026-01-07 01:09:49 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:49.282822 | orchestrator | 2026-01-07 01:09:49 | INFO  | Task 0869386e-47c3-4ab9-995b-825bc4ac8adb is in state SUCCESS 2026-01-07 01:09:49.284666 | orchestrator | 2026-01-07 01:09:49.284708 | orchestrator | 2026-01-07 01:09:49.284713 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:09:49.284719 | orchestrator | 2026-01-07 01:09:49.284723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:09:49.284728 | orchestrator | Wednesday 07 January 2026 01:07:05 +0000 (0:00:00.263) 0:00:00.263 ***** 2026-01-07 01:09:49.284731 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:09:49.284736 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:09:49.284740 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:09:49.284745 | orchestrator | 2026-01-07 01:09:49.284752 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:09:49.284758 | orchestrator | Wednesday 07 January 2026 01:07:06 +0000 (0:00:00.296) 0:00:00.559 ***** 2026-01-07 01:09:49.284764 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-07 01:09:49.284771 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-07 01:09:49.284777 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-07 01:09:49.284830 | orchestrator | 2026-01-07 01:09:49.284837 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-07 01:09:49.284841 | orchestrator | 2026-01-07 01:09:49.284845 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:09:49.284868 | orchestrator | Wednesday 07 
January 2026 01:07:06 +0000 (0:00:00.445) 0:00:01.005 ***** 2026-01-07 01:09:49.284873 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:09:49.284877 | orchestrator | 2026-01-07 01:09:49.284881 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-07 01:09:49.284885 | orchestrator | Wednesday 07 January 2026 01:07:07 +0000 (0:00:00.784) 0:00:01.789 ***** 2026-01-07 01:09:49.284889 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-07 01:09:49.284893 | orchestrator | 2026-01-07 01:09:49.284897 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-07 01:09:49.284908 | orchestrator | Wednesday 07 January 2026 01:07:12 +0000 (0:00:04.625) 0:00:06.415 ***** 2026-01-07 01:09:49.284912 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-07 01:09:49.284916 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-07 01:09:49.284920 | orchestrator | 2026-01-07 01:09:49.284924 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-07 01:09:49.284928 | orchestrator | Wednesday 07 January 2026 01:07:17 +0000 (0:00:05.874) 0:00:12.289 ***** 2026-01-07 01:09:49.284931 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-07 01:09:49.284935 | orchestrator | 2026-01-07 01:09:49.284939 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-07 01:09:49.284943 | orchestrator | Wednesday 07 January 2026 01:07:20 +0000 (0:00:02.725) 0:00:15.014 ***** 2026-01-07 01:09:49.284956 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-07 01:09:49.284960 | orchestrator | changed: 
[testbed-node-0] => (item=cinder -> service) 2026-01-07 01:09:49.284964 | orchestrator | 2026-01-07 01:09:49.284968 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-07 01:09:49.284971 | orchestrator | Wednesday 07 January 2026 01:07:24 +0000 (0:00:03.399) 0:00:18.413 ***** 2026-01-07 01:09:49.284975 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-07 01:09:49.284979 | orchestrator | 2026-01-07 01:09:49.284987 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-07 01:09:49.284991 | orchestrator | Wednesday 07 January 2026 01:07:27 +0000 (0:00:03.102) 0:00:21.516 ***** 2026-01-07 01:09:49.284994 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-07 01:09:49.285009 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-07 01:09:49.285015 | orchestrator | 2026-01-07 01:09:49.285023 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-07 01:09:49.285032 | orchestrator | Wednesday 07 January 2026 01:07:34 +0000 (0:00:07.392) 0:00:28.909 ***** 2026-01-07 01:09:49.285041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.285211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.285357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.285405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.285490 | orchestrator | 2026-01-07 01:09:49.285497 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:09:49.285504 | orchestrator | Wednesday 07 January 2026 01:07:38 +0000 (0:00:04.097) 0:00:33.006 ***** 2026-01-07 01:09:49.285510 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:49.285515 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:49.285519 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:49.285523 | orchestrator | 2026-01-07 01:09:49.285527 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-07 01:09:49.285531 | orchestrator | Wednesday 07 January 2026 01:07:39 +0000 (0:00:00.592) 0:00:33.599 ***** 2026-01-07 01:09:49.285534 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:09:49.285538 | orchestrator | 2026-01-07 01:09:49.285558 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-07 01:09:49.285563 | orchestrator | Wednesday 07 January 2026 01:07:40 +0000 (0:00:01.134) 0:00:34.733 ***** 2026-01-07 01:09:49.285569 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-07 01:09:49.285576 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-07 01:09:49.285580 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-07 01:09:49.285583 | 
orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-07 01:09:49.285587 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-07 01:09:49.285591 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-07 01:09:49.285594 | orchestrator |
2026-01-07 01:09:49.285598 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-07 01:09:49.285602 | orchestrator | Wednesday 07 January 2026 01:07:42 +0000 (0:00:01.629) 0:00:36.363 *****
2026-01-07 01:09:49.285612 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285625 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285633 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285639 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285663 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285675 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285687 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285696 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285703 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285728 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285736 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285751 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-07 01:09:49.285758 | orchestrator |
2026-01-07 01:09:49.285763 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-01-07 01:09:49.285767 | orchestrator | Wednesday 07 January 2026 01:07:45 +0000 (0:00:03.108) 0:00:39.472 *****
2026-01-07 01:09:49.285771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-07 01:09:49.285776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-07 01:09:49.285780 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-07 01:09:49.285784 | orchestrator |
2026-01-07 01:09:49.285787 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-01-07 01:09:49.285791 | orchestrator | Wednesday 07 January 2026 01:07:47 +0000 (0:00:01.993) 0:00:41.465 *****
2026-01-07 01:09:49.285795 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-01-07 01:09:49.285799 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-01-07 01:09:49.285803 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-01-07 01:09:49.285806 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
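[editor's note] The container definitions logged above each carry a healthcheck of the form {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}: the probe is retried a fixed number of times before the container is marked unhealthy. A minimal sketch of that retry semantics follows; `check_with_retries` and `flaky_probe` are hypothetical names for illustration, not kolla-ansible's actual healthcheck_curl/healthcheck_port implementation, and the interval/timeout handling is omitted.

```python
# Sketch of healthcheck retry semantics (hypothetical helper, not kolla's code).

def check_with_retries(probe, retries=3):
    """Run probe() up to `retries` times; healthy if any attempt returns True."""
    for _attempt in range(retries):
        try:
            if probe():
                return True
        except Exception:
            pass  # a raised error counts as one failed attempt
    return False

# Example: a probe that fails twice before succeeding, like a service
# still coming up within its start_period.
attempts = {"n": 0}

def flaky_probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(check_with_retries(flaky_probe, retries=3))    # True on the third attempt
print(check_with_retries(lambda: False, retries=3))  # False: all attempts fail
```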
2026-01-07 01:09:49.285810 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:09:49.285814 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-01-07 01:09:49.285817 | orchestrator |
2026-01-07 01:09:49.285821 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-01-07 01:09:49.285832 | orchestrator | Wednesday 07 January 2026 01:07:50 +0000 (0:00:03.446) 0:00:44.911 *****
2026-01-07 01:09:49.285836 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-01-07 01:09:49.285844 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-01-07 01:09:49.285848 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-01-07 01:09:49.285852 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-01-07 01:09:49.285856 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-01-07 01:09:49.285859 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-01-07 01:09:49.285863 | orchestrator |
2026-01-07 01:09:49.285867 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-01-07 01:09:49.285870 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:01.059) 0:00:45.971 *****
2026-01-07 01:09:49.285874 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:49.285878 | orchestrator |
2026-01-07 01:09:49.285882 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-01-07 01:09:49.285885 | orchestrator | Wednesday 07 January 2026 01:07:51 +0000 (0:00:00.107) 0:00:46.078 *****
2026-01-07 01:09:49.285889 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:49.285893 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:09:49.285910 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:09:49.285915 | orchestrator |
2026-01-07 01:09:49.285919 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-07 01:09:49.285923 | orchestrator | Wednesday 07 January 2026 01:07:52 +0000 (0:00:00.281) 0:00:46.360 *****
2026-01-07 01:09:49.285929 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:09:49.285933 | orchestrator |
2026-01-07 01:09:49.285937 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-01-07 01:09:49.285941 | orchestrator | Wednesday 07 January 2026 01:07:52 +0000 (0:00:00.655) 0:00:47.015 *****
2026-01-07 01:09:49.285948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.285960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.285969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.285977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286085 | orchestrator |
2026-01-07 01:09:49.286089 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-01-07 01:09:49.286093 | orchestrator | Wednesday 07 January 2026 01:07:56 +0000 (0:00:04.001) 0:00:51.017 *****
2026-01-07 01:09:49.286099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.286103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286121 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:49.286125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.286131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286142 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:09:49.286146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.286156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286171 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:09:49.286174 | orchestrator |
2026-01-07 01:09:49.286178 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-01-07 01:09:49.286182 | orchestrator | Wednesday 07 January 2026 01:07:57 +0000 (0:00:00.844) 0:00:51.862 *****
2026-01-07 01:09:49.286186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-07 01:09:49.286192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-07 01:09:49.286208 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:09:49.286215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'},
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:09:49.286219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286235 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:49.286239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:09:49.286245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 
01:09:49.286249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286259 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:49.286263 | orchestrator | 2026-01-07 01:09:49.286267 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-07 01:09:49.286271 | orchestrator | Wednesday 07 January 2026 01:07:58 +0000 (0:00:00.968) 0:00:52.830 ***** 2026-01-07 01:09:49.286275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286302 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286338 | orchestrator | 2026-01-07 01:09:49.286342 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-07 01:09:49.286346 | orchestrator | Wednesday 07 January 2026 01:08:03 +0000 (0:00:04.728) 0:00:57.559 ***** 2026-01-07 01:09:49.286350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-07 01:09:49.286358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-07 01:09:49.286365 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-07 01:09:49.286369 | orchestrator | 2026-01-07 01:09:49.286373 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2026-01-07 01:09:49.286377 | orchestrator | Wednesday 07 January 2026 01:08:05 +0000 (0:00:01.850) 0:00:59.409 ***** 2026-01-07 01:09:49.286380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286476 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286484 | orchestrator | 2026-01-07 01:09:49.286488 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-07 01:09:49.286492 | orchestrator | Wednesday 07 January 2026 01:08:17 +0000 (0:00:12.477) 0:01:11.888 ***** 2026-01-07 01:09:49.286497 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286504 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:49.286515 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:49.286521 | orchestrator | 2026-01-07 01:09:49.286527 | orchestrator | 
TASK [cinder : Copying over existing policy file] ****************************** 2026-01-07 01:09:49.286536 | orchestrator | Wednesday 07 January 2026 01:08:19 +0000 (0:00:02.331) 0:01:14.220 ***** 2026-01-07 01:09:49.286546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:09:49.286553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:09:49.286578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286597 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:49.286601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286614 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:49.286623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-07 01:09:49.286630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-07 01:09:49.286657 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:49.286663 | orchestrator | 2026-01-07 01:09:49.286668 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-07 01:09:49.286672 | orchestrator | Wednesday 07 January 
2026 01:08:20 +0000 (0:00:00.975) 0:01:15.195 ***** 2026-01-07 01:09:49.286676 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:49.286679 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:49.286683 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:49.286687 | orchestrator | 2026-01-07 01:09:49.286691 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-07 01:09:49.286694 | orchestrator | Wednesday 07 January 2026 01:08:21 +0000 (0:00:00.346) 0:01:15.541 ***** 2026-01-07 01:09:49.286698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-07 01:09:49.286718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-01-07 01:09:49.286722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-07 01:09:49.286768 | orchestrator | 2026-01-07 01:09:49.286772 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-01-07 01:09:49.286775 | orchestrator | Wednesday 07 January 2026 01:08:24 +0000 (0:00:03.267) 0:01:18.808 ***** 2026-01-07 01:09:49.286779 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:49.286783 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:09:49.286787 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:09:49.286791 | orchestrator | 2026-01-07 01:09:49.286794 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-07 01:09:49.286798 | orchestrator | Wednesday 07 January 2026 01:08:25 +0000 (0:00:00.899) 0:01:19.708 ***** 2026-01-07 01:09:49.286802 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286806 | orchestrator | 2026-01-07 01:09:49.286809 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-07 01:09:49.286813 | orchestrator | Wednesday 07 January 2026 01:08:27 +0000 (0:00:02.148) 0:01:21.857 ***** 2026-01-07 01:09:49.286817 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286821 | orchestrator | 2026-01-07 01:09:49.286828 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-07 01:09:49.286833 | orchestrator | Wednesday 07 January 2026 01:08:29 +0000 (0:00:01.932) 0:01:23.790 ***** 2026-01-07 01:09:49.286837 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286841 | orchestrator | 2026-01-07 01:09:49.286845 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:09:49.286849 | orchestrator | Wednesday 07 January 2026 01:08:46 +0000 (0:00:17.368) 0:01:41.158 ***** 2026-01-07 01:09:49.286853 | orchestrator | 2026-01-07 01:09:49.286856 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:09:49.286860 | orchestrator | Wednesday 07 January 2026 01:08:46 +0000 
(0:00:00.066) 0:01:41.225 ***** 2026-01-07 01:09:49.286864 | orchestrator | 2026-01-07 01:09:49.286868 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-07 01:09:49.286871 | orchestrator | Wednesday 07 January 2026 01:08:46 +0000 (0:00:00.066) 0:01:41.292 ***** 2026-01-07 01:09:49.286875 | orchestrator | 2026-01-07 01:09:49.286879 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-07 01:09:49.286882 | orchestrator | Wednesday 07 January 2026 01:08:47 +0000 (0:00:00.066) 0:01:41.358 ***** 2026-01-07 01:09:49.286886 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286890 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:49.286894 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:49.286897 | orchestrator | 2026-01-07 01:09:49.286901 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-07 01:09:49.286905 | orchestrator | Wednesday 07 January 2026 01:09:09 +0000 (0:00:22.125) 0:02:03.484 ***** 2026-01-07 01:09:49.286909 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286912 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:49.286916 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:49.286920 | orchestrator | 2026-01-07 01:09:49.286923 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-07 01:09:49.286927 | orchestrator | Wednesday 07 January 2026 01:09:14 +0000 (0:00:05.845) 0:02:09.329 ***** 2026-01-07 01:09:49.286931 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286935 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:49.286941 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:49.286945 | orchestrator | 2026-01-07 01:09:49.286949 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-07 
01:09:49.286952 | orchestrator | Wednesday 07 January 2026 01:09:36 +0000 (0:00:21.674) 0:02:31.003 ***** 2026-01-07 01:09:49.286956 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:09:49.286960 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:09:49.286964 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:09:49.286967 | orchestrator | 2026-01-07 01:09:49.286971 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-07 01:09:49.286975 | orchestrator | Wednesday 07 January 2026 01:09:48 +0000 (0:00:11.442) 0:02:42.445 ***** 2026-01-07 01:09:49.286978 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:09:49.286982 | orchestrator | 2026-01-07 01:09:49.286986 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:09:49.286990 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-07 01:09:49.286994 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:09:49.286998 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:09:49.287002 | orchestrator | 2026-01-07 01:09:49.287006 | orchestrator | 2026-01-07 01:09:49.287010 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:09:49.287013 | orchestrator | Wednesday 07 January 2026 01:09:48 +0000 (0:00:00.260) 0:02:42.706 ***** 2026-01-07 01:09:49.287019 | orchestrator | =============================================================================== 2026-01-07 01:09:49.287023 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.13s 2026-01-07 01:09:49.287027 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 21.67s 2026-01-07 01:09:49.287031 | orchestrator | cinder 
: Running Cinder bootstrap container ---------------------------- 17.37s 2026-01-07 01:09:49.287034 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.48s 2026-01-07 01:09:49.287038 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.44s 2026-01-07 01:09:49.287042 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.39s 2026-01-07 01:09:49.287046 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.87s 2026-01-07 01:09:49.287049 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.85s 2026-01-07 01:09:49.287053 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.73s 2026-01-07 01:09:49.287057 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.63s 2026-01-07 01:09:49.287061 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.10s 2026-01-07 01:09:49.287064 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.00s 2026-01-07 01:09:49.287068 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.45s 2026-01-07 01:09:49.287072 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.40s 2026-01-07 01:09:49.287076 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.27s 2026-01-07 01:09:49.287079 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.11s 2026-01-07 01:09:49.287083 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.10s 2026-01-07 01:09:49.287089 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.73s 2026-01-07 01:09:49.287093 | orchestrator | cinder : Generating 
'hostnqn' file for cinder_volume -------------------- 2.33s 2026-01-07 01:09:49.287097 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.15s 2026-01-07 01:09:49.287100 | orchestrator | 2026-01-07 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:52.331811 | orchestrator | 2026-01-07 01:09:52 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:09:52.333601 | orchestrator | 2026-01-07 01:09:52 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:52.335214 | orchestrator | 2026-01-07 01:09:52 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:52.335269 | orchestrator | 2026-01-07 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:55.375986 | orchestrator | 2026-01-07 01:09:55 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:09:55.378799 | orchestrator | 2026-01-07 01:09:55 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:55.379260 | orchestrator | 2026-01-07 01:09:55 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:55.379281 | orchestrator | 2026-01-07 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:09:58.422553 | orchestrator | 2026-01-07 01:09:58 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:09:58.424773 | orchestrator | 2026-01-07 01:09:58 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:09:58.427250 | orchestrator | 2026-01-07 01:09:58 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:09:58.427299 | orchestrator | 2026-01-07 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:10:01.474276 | orchestrator | 2026-01-07 01:10:01 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 
2026-01-07 01:11:23.838338 | orchestrator | 
2026-01-07 01:11:23 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:23.839123 | orchestrator | 2026-01-07 01:11:23 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:23.839975 | orchestrator | 2026-01-07 01:11:23 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:23.839998 | orchestrator | 2026-01-07 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:26.891853 | orchestrator | 2026-01-07 01:11:26 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:26.894936 | orchestrator | 2026-01-07 01:11:26 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:26.896524 | orchestrator | 2026-01-07 01:11:26 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:26.896809 | orchestrator | 2026-01-07 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:29.950042 | orchestrator | 2026-01-07 01:11:29 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:29.951152 | orchestrator | 2026-01-07 01:11:29 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:29.953393 | orchestrator | 2026-01-07 01:11:29 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:29.953586 | orchestrator | 2026-01-07 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:32.999261 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:33.002786 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:33.004544 | orchestrator | 2026-01-07 01:11:33 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:33.004762 | orchestrator | 2026-01-07 01:11:33 | INFO  | 
Wait 1 second(s) until the next check 2026-01-07 01:11:36.070334 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:36.071916 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:36.073584 | orchestrator | 2026-01-07 01:11:36 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:36.073626 | orchestrator | 2026-01-07 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:39.124633 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:39.126115 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:39.128550 | orchestrator | 2026-01-07 01:11:39 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:39.130108 | orchestrator | 2026-01-07 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:42.199731 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:42.201160 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:42.203242 | orchestrator | 2026-01-07 01:11:42 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:42.203286 | orchestrator | 2026-01-07 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:45.251958 | orchestrator | 2026-01-07 01:11:45 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:45.253430 | orchestrator | 2026-01-07 01:11:45 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:45.255257 | orchestrator | 2026-01-07 01:11:45 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state 
STARTED 2026-01-07 01:11:45.255353 | orchestrator | 2026-01-07 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:48.299175 | orchestrator | 2026-01-07 01:11:48 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:48.300718 | orchestrator | 2026-01-07 01:11:48 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:48.302547 | orchestrator | 2026-01-07 01:11:48 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:48.302595 | orchestrator | 2026-01-07 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:51.352035 | orchestrator | 2026-01-07 01:11:51 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:51.353635 | orchestrator | 2026-01-07 01:11:51 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:51.355790 | orchestrator | 2026-01-07 01:11:51 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:51.355834 | orchestrator | 2026-01-07 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:54.404769 | orchestrator | 2026-01-07 01:11:54 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:54.407587 | orchestrator | 2026-01-07 01:11:54 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:54.409469 | orchestrator | 2026-01-07 01:11:54 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:54.409503 | orchestrator | 2026-01-07 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:11:57.457542 | orchestrator | 2026-01-07 01:11:57 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:11:57.460910 | orchestrator | 2026-01-07 01:11:57 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:11:57.462734 | orchestrator | 
2026-01-07 01:11:57 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:11:57.462797 | orchestrator | 2026-01-07 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:12:00.521232 | orchestrator | 2026-01-07 01:12:00 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:12:00.522981 | orchestrator | 2026-01-07 01:12:00 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:12:00.523811 | orchestrator | 2026-01-07 01:12:00 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:12:00.523909 | orchestrator | 2026-01-07 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:12:03.571120 | orchestrator | 2026-01-07 01:12:03 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:12:03.573256 | orchestrator | 2026-01-07 01:12:03 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:12:03.574770 | orchestrator | 2026-01-07 01:12:03 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:12:03.574852 | orchestrator | 2026-01-07 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:12:06.620649 | orchestrator | 2026-01-07 01:12:06 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:12:06.622575 | orchestrator | 2026-01-07 01:12:06 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:12:06.625543 | orchestrator | 2026-01-07 01:12:06 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:12:06.625593 | orchestrator | 2026-01-07 01:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:12:09.672844 | orchestrator | 2026-01-07 01:12:09 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state STARTED 2026-01-07 01:12:09.673918 | orchestrator | 2026-01-07 01:12:09 | INFO  | Task 
603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED 2026-01-07 01:12:09.675230 | orchestrator | 2026-01-07 01:12:09 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:12:09.675313 | orchestrator | 2026-01-07 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:12:12.727312 | orchestrator | 2026-01-07 01:12:12.727413 | orchestrator | 2026-01-07 01:12:12.727425 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:12:12.727434 | orchestrator | 2026-01-07 01:12:12.727441 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:12:12.727449 | orchestrator | Wednesday 07 January 2026 01:09:49 +0000 (0:00:00.247) 0:00:00.247 ***** 2026-01-07 01:12:12.727512 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:12:12.727521 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:12:12.727528 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:12:12.727534 | orchestrator | 2026-01-07 01:12:12.727615 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:12:12.727624 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 (0:00:00.284) 0:00:00.532 ***** 2026-01-07 01:12:12.727631 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-07 01:12:12.727653 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-07 01:12:12.727660 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-07 01:12:12.727667 | orchestrator | 2026-01-07 01:12:12.727674 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-07 01:12:12.727681 | orchestrator | 2026-01-07 01:12:12.727687 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-07 01:12:12.727694 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 
(0:00:00.435) 0:00:00.968 ***** 2026-01-07 01:12:12.727709 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:12:12.727724 | orchestrator | 2026-01-07 01:12:12.727731 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-07 01:12:12.727738 | orchestrator | Wednesday 07 January 2026 01:09:50 +0000 (0:00:00.519) 0:00:01.487 ***** 2026-01-07 01:12:12.727747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.727757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.727764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.727789 | orchestrator | 2026-01-07 01:12:12.727796 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-07 01:12:12.727802 | orchestrator | Wednesday 07 January 2026 01:09:51 +0000 (0:00:00.755) 0:00:02.243 ***** 2026-01-07 01:12:12.727843 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-07 01:12:12.727906 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-07 01:12:12.727914 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-07 01:12:12.727921 | orchestrator | 2026-01-07 01:12:12.727928 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-07 01:12:12.727935 | orchestrator | Wednesday 07 January 2026 01:09:52 +0000 (0:00:00.860) 0:00:03.104 ***** 2026-01-07 01:12:12.727942 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:12:12.727950 | orchestrator | 2026-01-07 01:12:12.727957 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-07 01:12:12.727964 | orchestrator | Wednesday 07 January 2026 01:09:53 +0000 (0:00:00.742) 0:00:03.846 ***** 2026-01-07 01:12:12.728039 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728063 | orchestrator | 2026-01-07 01:12:12.728068 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-07 01:12:12.728073 | orchestrator | Wednesday 07 January 2026 01:09:54 +0000 (0:00:01.365) 0:00:05.212 ***** 2026-01-07 01:12:12.728077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728094 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:12:12.728099 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:12:12.728121 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728126 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:12:12.728131 | orchestrator | 2026-01-07 01:12:12.728136 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-07 01:12:12.728140 | orchestrator | Wednesday 07 January 2026 01:09:55 +0000 (0:00:00.382) 0:00:05.594 ***** 2026-01-07 01:12:12.728147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728152 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:12:12.728157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728161 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:12:12.728166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-07 01:12:12.728175 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:12:12.728179 | orchestrator | 2026-01-07 01:12:12.728184 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-07 01:12:12.728188 | orchestrator | Wednesday 07 January 2026 01:09:55 +0000 (0:00:00.807) 0:00:06.402 ***** 2026-01-07 01:12:12.728192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728213 | orchestrator | 2026-01-07 01:12:12.728218 | orchestrator | TASK 
[grafana : Copying over grafana.ini] ************************************** 2026-01-07 01:12:12.728223 | orchestrator | Wednesday 07 January 2026 01:09:57 +0000 (0:00:01.240) 0:00:07.643 ***** 2026-01-07 01:12:12.728227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.728246 | orchestrator | 2026-01-07 01:12:12.728249 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-07 01:12:12.728255 | orchestrator | Wednesday 07 January 2026 01:09:58 +0000 (0:00:01.191) 0:00:08.834 ***** 2026-01-07 01:12:12.728262 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:12:12.728268 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:12:12.728274 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:12:12.728280 | orchestrator | 2026-01-07 01:12:12.728286 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-07 01:12:12.728292 | orchestrator | Wednesday 07 January 2026 01:09:58 +0000 (0:00:00.489) 0:00:09.324 ***** 2026-01-07 01:12:12.728298 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:12:12.728304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:12:12.728310 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-07 01:12:12.728345 | orchestrator | 2026-01-07 01:12:12.728352 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-07 01:12:12.728358 | orchestrator | Wednesday 07 January 2026 01:09:59 +0000 (0:00:01.156) 0:00:10.480 ***** 2026-01-07 01:12:12.728365 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 
2026-01-07 01:12:12.728376 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-07 01:12:12.728382 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-07 01:12:12.728388 | orchestrator |
2026-01-07 01:12:12.728393 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-01-07 01:12:12.728399 | orchestrator | Wednesday 07 January 2026 01:10:01 +0000 (0:00:01.243) 0:00:11.724 *****
2026-01-07 01:12:12.728406 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-07 01:12:12.728412 | orchestrator |
2026-01-07 01:12:12.728418 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-01-07 01:12:12.728424 | orchestrator | Wednesday 07 January 2026 01:10:01 +0000 (0:00:00.708) 0:00:12.433 *****
2026-01-07 01:12:12.728435 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-01-07 01:12:12.728441 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-01-07 01:12:12.728446 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:12:12.728452 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:12:12.728458 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:12:12.728464 | orchestrator |
2026-01-07 01:12:12.728471 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-01-07 01:12:12.728477 | orchestrator | Wednesday 07 January 2026 01:10:02 +0000 (0:00:00.718) 0:00:13.151 *****
2026-01-07 01:12:12.728491 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:12:12.728497 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:12:12.728504 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:12:12.728510 | orchestrator |
2026-01-07 01:12:12.728517 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-07 01:12:12.728523 | orchestrator | Wednesday 07 January 2026 01:10:03 +0000 (0:00:00.544) 0:00:13.696 *****
2026-01-07 01:12:12.728530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.196684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.728537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.196684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.728541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320909, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.196684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.728546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2101178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.728558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320976, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2101178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.728566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320976, 'dev': 111, 'nlink': 1, 'atime':
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2101178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320925, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1994302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320925, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1994302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320925, 'dev': 111, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1994302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1320980, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.211684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1320980, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.211684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 
'inode': 1320980, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.211684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2036839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2036839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320945, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2036839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.207684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.207684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320966, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.207684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320903, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1948998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320903, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1948998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320903, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1948998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1974883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1974883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.728669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320919, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1974883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12 | INFO  | Task b649c420-9de1-415c-b2af-21be7850bff1 is in state SUCCESS 2026-01-07 01:12:12.729099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320929, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1996672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320929, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1996672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json',
'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320929, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1996672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.205146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.205146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729134 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320956, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.205146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320970, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.209328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320970, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.209328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729175 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320970, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.209328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1986837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1986837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729193 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320921, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.1986837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320964, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2066839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320964, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2066839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729225 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320964, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2066839, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320950, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2044692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320950, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2044692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-01-07 01:12:12.729244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320950, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2044692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1320941, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.202684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1320941, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.202684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-07 01:12:12.729276 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json -> /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 62676 bytes)
2026-01-07 01:12:12.729282 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json -> /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-01-07 01:12:12.729288 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json -> /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-01-07 01:12:12.729295 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json -> /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27218 bytes)
2026-01-07 01:12:12.729306 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json -> /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-01-07 01:12:12.729319 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json -> /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-01-07 01:12:12.729325 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json -> /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49139 bytes)
2026-01-07 01:12:12.729331 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json -> /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-01-07 01:12:12.729338 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json -> /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-01-07 01:12:12.729343 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json -> /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-01-07 01:12:12.729353 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json -> /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 44791 bytes)
2026-01-07 01:12:12.729364 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json -> /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-01-07 01:12:12.729374 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json -> /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-01-07 01:12:12.729382 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json -> /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16156 bytes)
2026-01-07 01:12:12.729386 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json -> /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-01-07 01:12:12.729390 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json -> /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-01-07 01:12:12.729394 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json -> /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-01-07 01:12:12.729404 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json -> /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-01-07 01:12:12.729412 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json -> /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-01-07 01:12:12.729417 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json -> /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-01-07 01:12:12.729420 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json -> /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-01-07 01:12:12.729424 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-01-07 01:12:12.729428 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json -> /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-01-07 01:12:12.729435 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-01-07 01:12:12.729445 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json -> /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-01-07 01:12:12.729451 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15725 bytes)
2026-01-07 01:12:12.729454 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json -> /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-01-07 01:12:12.729458 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json -> /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-01-07 01:12:12.729466 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json -> /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-01-07 01:12:12.729469 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json -> /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-01-07 01:12:12.729481 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json -> /operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode 0644, root:root, 682774 bytes)
2026-01-07 01:12:12.729485 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json -> /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-01-07 01:12:12.729489 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json -> /operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode 0644, root:root, 682774 bytes)
2026-01-07 01:12:12.729493 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json -> /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json, mode 0644, root:root, 22317 bytes)
2026-01-07 01:12:12.729501 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json -> /operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode 0644, root:root, 682774 bytes)
2026-01-07 01:12:12.729505 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json -> /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json, mode 0644, root:root, 22317 bytes)
2026-01-07 01:12:12.729512 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json -> /operations/grafana/dashboards/infrastructure/redfish.json, mode 0644, root:root, 38087 bytes)
2026-01-07 01:12:12.729519 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json -> /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json, mode 0644, root:root, 22317 bytes)
2026-01-07 01:12:12.729523 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json -> /operations/grafana/dashboards/infrastructure/redfish.json, mode 0644, root:root, 38087 bytes)
2026-01-07 01:12:12.729527 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json -> /operations/grafana/dashboards/infrastructure/nodes.json, mode 0644, root:root, 21109 bytes)
2026-01-07 01:12:12.729534 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json -> /operations/grafana/dashboards/infrastructure/redfish.json, mode 0644, root:root, 38087 bytes)
2026-01-07 01:12:12.729538 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json -> /operations/grafana/dashboards/infrastructure/nodes.json, mode 0644, root:root, 21109 bytes)
2026-01-07 01:12:12.729545 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json -> /operations/grafana/dashboards/infrastructure/memcached.json, mode 0644, root:root, 24243 bytes)
2026-01-07 01:12:12.729552 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json -> /operations/grafana/dashboards/infrastructure/nodes.json, mode 0644, root:root, 21109 bytes)
2026-01-07 01:12:12.729556 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json -> /operations/grafana/dashboards/infrastructure/memcached.json, mode 0644, root:root, 24243 bytes)
2026-01-07 01:12:12.729581 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json -> /operations/grafana/dashboards/infrastructure/fluentd.json, mode 0644, root:root, 82960 bytes)
2026-01-07 01:12:12.729588 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json -> /operations/grafana/dashboards/infrastructure/memcached.json, mode 0644, root:root, 24243 bytes)
2026-01-07 01:12:12.729592 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json -> /operations/grafana/dashboards/infrastructure/fluentd.json, mode 0644, root:root, 82960 bytes)
2026-01-07 01:12:12.729598 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json -> /operations/grafana/dashboards/infrastructure/libvirt.json, mode 0644, root:root, 29672 bytes)
2026-01-07 01:12:12.729605 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json -> /operations/grafana/dashboards/infrastructure/libvirt.json, mode 0644, root:root, 29672 bytes)
2026-01-07 01:12:12.729610 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json -> /operations/grafana/dashboards/infrastructure/fluentd.json, mode 0644, root:root, 82960 bytes)
2026-01-07 01:12:12.729613 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json -> /operations/grafana/dashboards/infrastructure/elasticsearch.json, mode 0644, root:root, 187864 bytes)
2026-01-07 01:12:12.729623 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json -> /operations/grafana/dashboards/infrastructure/elasticsearch.json, mode 0644, root:root, 187864 bytes)
2026-01-07 01:12:12.729627 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json -> /operations/grafana/dashboards/infrastructure/libvirt.json, mode 0644, root:root, 29672 bytes)
2026-01-07 01:12:12.729632 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, mode 0644, root:root, 16098 bytes)
2026-01-07 01:12:12.729643 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, mode 0644, root:root, 16098 bytes)
2026-01-07 01:12:12.729648 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json -> /operations/grafana/dashboards/infrastructure/elasticsearch.json, mode 0644, root:root, 187864 bytes)
2026-01-07 01:12:12.729652 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json -> /operations/grafana/dashboards/infrastructure/rabbitmq.json, mode 0644, root:root, 222049 bytes)
2026-01-07 01:12:12.729660 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json -> /operations/grafana/dashboards/infrastructure/rabbitmq.json, mode 0644, root:root, 222049 bytes)
2026-01-07 01:12:12.729664 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, mode 0644, root:root, 16098 bytes)
2026-01-07 01:12:12.729669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321073, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2396846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True,
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321073, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2396846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1321076, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2436845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320990, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1767745157.2132368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320990, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2132368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321073, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2396846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 53882, 'inode': 1320993, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.213684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1320993, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.213684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320990, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2132368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321058, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2326844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321058, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2326844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1320993, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.213684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321072, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2386844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321072, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2386844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321058, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2326844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729766 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321072, 'dev': 111, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767745157.2386844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-07 01:12:12.729773 | orchestrator | 2026-01-07 01:12:12.729778 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-07 01:12:12.729783 | orchestrator | Wednesday 07 January 2026 01:10:39 +0000 (0:00:36.600) 0:00:50.297 ***** 2026-01-07 01:12:12.729788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.729792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.729797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-07 01:12:12.729801 | orchestrator | 2026-01-07 01:12:12.729806 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-07 01:12:12.729810 | orchestrator | Wednesday 07 January 2026 01:10:40 +0000 (0:00:01.151) 0:00:51.448 ***** 2026-01-07 01:12:12.729815 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:12:12.729819 | orchestrator | 2026-01-07 01:12:12.729823 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-07 01:12:12.729856 | orchestrator | Wednesday 07 January 2026 01:10:43 +0000 (0:00:03.015) 0:00:54.464 ***** 2026-01-07 01:12:12.729861 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:12:12.729865 | orchestrator | 2026-01-07 01:12:12.729869 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-07 01:12:12.729873 | 
orchestrator | Wednesday 07 January 2026 01:10:46 +0000 (0:00:02.251) 0:00:56.716 ***** 2026-01-07 01:12:12.729876 | orchestrator | 2026-01-07 01:12:12.729880 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-07 01:12:12.729884 | orchestrator | Wednesday 07 January 2026 01:10:46 +0000 (0:00:00.067) 0:00:56.784 ***** 2026-01-07 01:12:12.729888 | orchestrator | 2026-01-07 01:12:12.729894 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-07 01:12:12.729898 | orchestrator | Wednesday 07 January 2026 01:10:46 +0000 (0:00:00.059) 0:00:56.843 ***** 2026-01-07 01:12:12.729906 | orchestrator | 2026-01-07 01:12:12.729909 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-07 01:12:12.729913 | orchestrator | Wednesday 07 January 2026 01:10:46 +0000 (0:00:00.221) 0:00:57.065 ***** 2026-01-07 01:12:12.729917 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:12:12.729921 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:12:12.729924 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:12:12.729928 | orchestrator | 2026-01-07 01:12:12.729932 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-07 01:12:12.729950 | orchestrator | Wednesday 07 January 2026 01:10:48 +0000 (0:00:01.495) 0:00:58.560 ***** 2026-01-07 01:12:12.729954 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:12:12.729964 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:12:12.729968 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-07 01:12:12.729972 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-01-07 01:12:12.729976 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-07 01:12:12.729980 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-07 01:12:12.729984 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:12:12.730006 | orchestrator |
2026-01-07 01:12:12.730049 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-07 01:12:12.730055 | orchestrator | Wednesday 07 January 2026 01:11:38 +0000 (0:00:50.824) 0:01:49.384 *****
2026-01-07 01:12:12.730059 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:12:12.730062 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:12:12.730066 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:12:12.730070 | orchestrator |
2026-01-07 01:12:12.730073 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-07 01:12:12.730077 | orchestrator | Wednesday 07 January 2026 01:12:04 +0000 (0:00:25.394) 0:02:14.779 *****
2026-01-07 01:12:12.730081 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:12:12.730085 | orchestrator |
2026-01-07 01:12:12.730088 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-07 01:12:12.730092 | orchestrator | Wednesday 07 January 2026 01:12:06 +0000 (0:00:02.147) 0:02:16.926 *****
2026-01-07 01:12:12.730096 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:12:12.730100 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:12:12.730103 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:12:12.730107 | orchestrator |
2026-01-07 01:12:12.730111 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-07 01:12:12.730117 | orchestrator | Wednesday 07 January 2026 01:12:06 +0000 (0:00:00.579) 0:02:17.506 *****
2026-01-07 01:12:12.730194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-07 01:12:12.730204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-07 01:12:12.730211 | orchestrator |
2026-01-07 01:12:12.730217 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-07 01:12:12.730223 | orchestrator | Wednesday 07 January 2026 01:12:09 +0000 (0:00:02.763) 0:02:20.270 *****
2026-01-07 01:12:12.730230 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:12:12.730235 | orchestrator |
2026-01-07 01:12:12.730240 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:12:12.730253 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:12:12.730260 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:12:12.730266 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-07 01:12:12.730272 | orchestrator |
2026-01-07 01:12:12.730284 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:12:12.730290 | orchestrator | Wednesday 07 January 2026 01:12:10 +0000 (0:00:00.272) 0:02:20.543 *****
2026-01-07 01:12:12.730301 | orchestrator | ===============================================================================
2026-01-07 01:12:12.730307 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.82s
2026-01-07 01:12:12.730314 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.60s
2026-01-07 01:12:12.730320 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.39s
2026-01-07 01:12:12.730325 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.02s
2026-01-07 01:12:12.730331 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.76s
2026-01-07 01:12:12.730341 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2026-01-07 01:12:12.730348 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.15s
2026-01-07 01:12:12.730354 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.50s
2026-01-07 01:12:12.730359 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s
2026-01-07 01:12:12.730365 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.24s
2026-01-07 01:12:12.730370 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s
2026-01-07 01:12:12.730375 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.19s
2026-01-07 01:12:12.730381 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.16s
2026-01-07 01:12:12.730387 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s
2026-01-07 01:12:12.730392 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s
2026-01-07 01:12:12.730398 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.81s
2026-01-07 01:12:12.730404 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.76s
2026-01-07 01:12:12.730409 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s
2026-01-07 01:12:12.730416 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.72s
2026-01-07 01:12:12.730421 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.71s
2026-01-07 01:12:12.730428 | orchestrator | 2026-01-07 01:12:12 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED
2026-01-07 01:12:12.733419 | orchestrator | 2026-01-07 01:12:12 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED
2026-01-07 01:12:12.733485 | orchestrator | 2026-01-07 01:12:12 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:12:15.781738 | orchestrator | 2026-01-07 01:12:15 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED
2026-01-07 01:12:15.783269 | orchestrator | 2026-01-07 01:12:15 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED
2026-01-07 01:12:15.783306 | orchestrator | 2026-01-07 01:12:15 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:12:18.840808 | orchestrator | 2026-01-07 01:12:18 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED
2026-01-07 01:12:18.841826 | orchestrator | 2026-01-07 01:12:18 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED
2026-01-07 01:12:18.841847 | orchestrator | 2026-01-07 01:12:18 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:12:21.882785 | orchestrator | 2026-01-07 01:12:21 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state STARTED
2026-01-07 01:12:21.885126 | orchestrator | 2026-01-07 01:12:21 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED
2026-01-07 01:12:21.885227 | orchestrator | 2026-01-07 01:12:21 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds: tasks 603eeb30-87ed-4341-ba6b-35d134be7aac and 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 remain in state STARTED through 01:12:55]
2026-01-07 01:12:58.441957 | orchestrator | 2026-01-07 01:12:58 | INFO  | Task 603eeb30-87ed-4341-ba6b-35d134be7aac is in state SUCCESS
2026-01-07 01:12:58.444928 | orchestrator | 2026-01-07 01:12:58 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED
2026-01-07 01:12:58.446914 | orchestrator | 2026-01-07 01:12:58 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:12:58.446953 | orchestrator | 2026-01-07 01:12:58 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds: tasks 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 and 183d7cb7-bb5a-4864-969c-31b667d379b9 remain in state STARTED through 01:13:22]
2026-01-07 01:13:25.865662 | orchestrator | 2026-01-07 01:13:25
| INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:25.867162 | orchestrator | 2026-01-07 01:13:25 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:25.867315 | orchestrator | 2026-01-07 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:28.915559 | orchestrator | 2026-01-07 01:13:28 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:28.918417 | orchestrator | 2026-01-07 01:13:28 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:28.918583 | orchestrator | 2026-01-07 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:31.961744 | orchestrator | 2026-01-07 01:13:31 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:31.962604 | orchestrator | 2026-01-07 01:13:31 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:31.962634 | orchestrator | 2026-01-07 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:35.004169 | orchestrator | 2026-01-07 01:13:35 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:35.006737 | orchestrator | 2026-01-07 01:13:35 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:35.006786 | orchestrator | 2026-01-07 01:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:38.053196 | orchestrator | 2026-01-07 01:13:38 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:38.054551 | orchestrator | 2026-01-07 01:13:38 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:38.054620 | orchestrator | 2026-01-07 01:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:41.091012 | orchestrator | 2026-01-07 01:13:41 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 
2026-01-07 01:13:41.091568 | orchestrator | 2026-01-07 01:13:41 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:41.091589 | orchestrator | 2026-01-07 01:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:44.125928 | orchestrator | 2026-01-07 01:13:44 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:44.126583 | orchestrator | 2026-01-07 01:13:44 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:44.126643 | orchestrator | 2026-01-07 01:13:44 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:47.176535 | orchestrator | 2026-01-07 01:13:47 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:47.177881 | orchestrator | 2026-01-07 01:13:47 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:47.177914 | orchestrator | 2026-01-07 01:13:47 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:50.214601 | orchestrator | 2026-01-07 01:13:50 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:50.215128 | orchestrator | 2026-01-07 01:13:50 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:50.215177 | orchestrator | 2026-01-07 01:13:50 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:53.252708 | orchestrator | 2026-01-07 01:13:53 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:53.253609 | orchestrator | 2026-01-07 01:13:53 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:53.253643 | orchestrator | 2026-01-07 01:13:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:56.296525 | orchestrator | 2026-01-07 01:13:56 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:56.297403 | orchestrator | 2026-01-07 01:13:56 | INFO  | Task 
183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:56.297442 | orchestrator | 2026-01-07 01:13:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:13:59.354292 | orchestrator | 2026-01-07 01:13:59 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:13:59.359741 | orchestrator | 2026-01-07 01:13:59 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:13:59.359834 | orchestrator | 2026-01-07 01:13:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:02.390920 | orchestrator | 2026-01-07 01:14:02 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:02.392506 | orchestrator | 2026-01-07 01:14:02 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:02.392693 | orchestrator | 2026-01-07 01:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:05.439128 | orchestrator | 2026-01-07 01:14:05 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:05.440730 | orchestrator | 2026-01-07 01:14:05 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:05.440775 | orchestrator | 2026-01-07 01:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:08.477626 | orchestrator | 2026-01-07 01:14:08 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:08.479289 | orchestrator | 2026-01-07 01:14:08 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:08.479480 | orchestrator | 2026-01-07 01:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:11.521630 | orchestrator | 2026-01-07 01:14:11 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:11.523042 | orchestrator | 2026-01-07 01:14:11 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 
01:14:11.523095 | orchestrator | 2026-01-07 01:14:11 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:14.561904 | orchestrator | 2026-01-07 01:14:14 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:14.562007 | orchestrator | 2026-01-07 01:14:14 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:14.562088 | orchestrator | 2026-01-07 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:17.606542 | orchestrator | 2026-01-07 01:14:17 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:17.606907 | orchestrator | 2026-01-07 01:14:17 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:17.606925 | orchestrator | 2026-01-07 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:20.645966 | orchestrator | 2026-01-07 01:14:20 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:20.647878 | orchestrator | 2026-01-07 01:14:20 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:20.647955 | orchestrator | 2026-01-07 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:23.689225 | orchestrator | 2026-01-07 01:14:23 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:23.693449 | orchestrator | 2026-01-07 01:14:23 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:23.693492 | orchestrator | 2026-01-07 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:26.736634 | orchestrator | 2026-01-07 01:14:26 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:26.738820 | orchestrator | 2026-01-07 01:14:26 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:26.739183 | orchestrator | 2026-01-07 01:14:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 01:14:29.784879 | orchestrator | 2026-01-07 01:14:29 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:29.789513 | orchestrator | 2026-01-07 01:14:29 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:29.789564 | orchestrator | 2026-01-07 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:32.840012 | orchestrator | 2026-01-07 01:14:32 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:32.844311 | orchestrator | 2026-01-07 01:14:32 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:32.844487 | orchestrator | 2026-01-07 01:14:32 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:35.899333 | orchestrator | 2026-01-07 01:14:35 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:35.900949 | orchestrator | 2026-01-07 01:14:35 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:35.901004 | orchestrator | 2026-01-07 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:38.946676 | orchestrator | 2026-01-07 01:14:38 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:38.947913 | orchestrator | 2026-01-07 01:14:38 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:38.947958 | orchestrator | 2026-01-07 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:41.990695 | orchestrator | 2026-01-07 01:14:41 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:41.990753 | orchestrator | 2026-01-07 01:14:41 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:41.990762 | orchestrator | 2026-01-07 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:45.048159 | orchestrator | 2026-01-07 
01:14:45 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:45.049272 | orchestrator | 2026-01-07 01:14:45 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:45.049311 | orchestrator | 2026-01-07 01:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:48.108751 | orchestrator | 2026-01-07 01:14:48 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:48.111721 | orchestrator | 2026-01-07 01:14:48 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:48.111897 | orchestrator | 2026-01-07 01:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:51.161844 | orchestrator | 2026-01-07 01:14:51 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:51.163605 | orchestrator | 2026-01-07 01:14:51 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:51.163659 | orchestrator | 2026-01-07 01:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:54.225862 | orchestrator | 2026-01-07 01:14:54 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:54.228139 | orchestrator | 2026-01-07 01:14:54 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:54.228189 | orchestrator | 2026-01-07 01:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:14:57.274822 | orchestrator | 2026-01-07 01:14:57 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:14:57.278102 | orchestrator | 2026-01-07 01:14:57 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:14:57.278213 | orchestrator | 2026-01-07 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:00.332249 | orchestrator | 2026-01-07 01:15:00 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state 
STARTED 2026-01-07 01:15:00.334674 | orchestrator | 2026-01-07 01:15:00 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:00.334735 | orchestrator | 2026-01-07 01:15:00 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:03.376767 | orchestrator | 2026-01-07 01:15:03 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:03.376885 | orchestrator | 2026-01-07 01:15:03 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:03.376895 | orchestrator | 2026-01-07 01:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:06.420855 | orchestrator | 2026-01-07 01:15:06 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:06.422229 | orchestrator | 2026-01-07 01:15:06 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:06.422430 | orchestrator | 2026-01-07 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:09.480358 | orchestrator | 2026-01-07 01:15:09 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:09.481179 | orchestrator | 2026-01-07 01:15:09 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:09.481222 | orchestrator | 2026-01-07 01:15:09 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:12.535761 | orchestrator | 2026-01-07 01:15:12 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:12.538307 | orchestrator | 2026-01-07 01:15:12 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:12.538352 | orchestrator | 2026-01-07 01:15:12 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:15.590540 | orchestrator | 2026-01-07 01:15:15 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:15.592792 | orchestrator | 2026-01-07 01:15:15 | INFO  
| Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:15.592843 | orchestrator | 2026-01-07 01:15:15 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:18.640440 | orchestrator | 2026-01-07 01:15:18 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:18.643640 | orchestrator | 2026-01-07 01:15:18 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:18.643691 | orchestrator | 2026-01-07 01:15:18 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:21.696056 | orchestrator | 2026-01-07 01:15:21 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:21.697729 | orchestrator | 2026-01-07 01:15:21 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:21.697840 | orchestrator | 2026-01-07 01:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:24.747119 | orchestrator | 2026-01-07 01:15:24 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:24.749865 | orchestrator | 2026-01-07 01:15:24 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:24.749918 | orchestrator | 2026-01-07 01:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:27.783346 | orchestrator | 2026-01-07 01:15:27 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:27.785384 | orchestrator | 2026-01-07 01:15:27 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:27.785434 | orchestrator | 2026-01-07 01:15:27 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:30.834357 | orchestrator | 2026-01-07 01:15:30 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:30.835586 | orchestrator | 2026-01-07 01:15:30 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 
01:15:30.835633 | orchestrator | 2026-01-07 01:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:33.881441 | orchestrator | 2026-01-07 01:15:33 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:33.881509 | orchestrator | 2026-01-07 01:15:33 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:33.881518 | orchestrator | 2026-01-07 01:15:33 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:36.903113 | orchestrator | 2026-01-07 01:15:36 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:36.904119 | orchestrator | 2026-01-07 01:15:36 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:36.904160 | orchestrator | 2026-01-07 01:15:36 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:39.952402 | orchestrator | 2026-01-07 01:15:39 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:39.955643 | orchestrator | 2026-01-07 01:15:39 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:39.955734 | orchestrator | 2026-01-07 01:15:39 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:43.001033 | orchestrator | 2026-01-07 01:15:43 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:43.003299 | orchestrator | 2026-01-07 01:15:43 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:43.004171 | orchestrator | 2026-01-07 01:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:46.052986 | orchestrator | 2026-01-07 01:15:46 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:46.054337 | orchestrator | 2026-01-07 01:15:46 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:46.054666 | orchestrator | 2026-01-07 01:15:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 01:15:49.096899 | orchestrator | 2026-01-07 01:15:49 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:49.097937 | orchestrator | 2026-01-07 01:15:49 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:49.097975 | orchestrator | 2026-01-07 01:15:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:52.138333 | orchestrator | 2026-01-07 01:15:52 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:52.140954 | orchestrator | 2026-01-07 01:15:52 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:52.141083 | orchestrator | 2026-01-07 01:15:52 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:55.173853 | orchestrator | 2026-01-07 01:15:55 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:55.174589 | orchestrator | 2026-01-07 01:15:55 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:55.174627 | orchestrator | 2026-01-07 01:15:55 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:15:58.205434 | orchestrator | 2026-01-07 01:15:58 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:15:58.206491 | orchestrator | 2026-01-07 01:15:58 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:15:58.206761 | orchestrator | 2026-01-07 01:15:58 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:01.261069 | orchestrator | 2026-01-07 01:16:01 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:01.261875 | orchestrator | 2026-01-07 01:16:01 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:01.262136 | orchestrator | 2026-01-07 01:16:01 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:04.303884 | orchestrator | 2026-01-07 
01:16:04 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:04.305669 | orchestrator | 2026-01-07 01:16:04 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:04.305713 | orchestrator | 2026-01-07 01:16:04 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:07.349879 | orchestrator | 2026-01-07 01:16:07 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:07.351963 | orchestrator | 2026-01-07 01:16:07 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:07.352020 | orchestrator | 2026-01-07 01:16:07 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:10.394676 | orchestrator | 2026-01-07 01:16:10 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:10.395833 | orchestrator | 2026-01-07 01:16:10 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:10.395860 | orchestrator | 2026-01-07 01:16:10 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:13.434325 | orchestrator | 2026-01-07 01:16:13 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:13.435187 | orchestrator | 2026-01-07 01:16:13 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:13.435381 | orchestrator | 2026-01-07 01:16:13 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:16.491340 | orchestrator | 2026-01-07 01:16:16 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:16.492025 | orchestrator | 2026-01-07 01:16:16 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:16.492059 | orchestrator | 2026-01-07 01:16:16 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:19.527032 | orchestrator | 2026-01-07 01:16:19 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state 
STARTED 2026-01-07 01:16:19.527127 | orchestrator | 2026-01-07 01:16:19 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:19.527140 | orchestrator | 2026-01-07 01:16:19 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:22.563024 | orchestrator | 2026-01-07 01:16:22 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:22.563947 | orchestrator | 2026-01-07 01:16:22 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:22.563989 | orchestrator | 2026-01-07 01:16:22 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:25.617137 | orchestrator | 2026-01-07 01:16:25 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:25.617255 | orchestrator | 2026-01-07 01:16:25 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:25.618713 | orchestrator | 2026-01-07 01:16:25 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:28.643408 | orchestrator | 2026-01-07 01:16:28 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:28.643768 | orchestrator | 2026-01-07 01:16:28 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:28.643785 | orchestrator | 2026-01-07 01:16:28 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:31.687711 | orchestrator | 2026-01-07 01:16:31 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:31.689254 | orchestrator | 2026-01-07 01:16:31 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:31.689313 | orchestrator | 2026-01-07 01:16:31 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:34.731242 | orchestrator | 2026-01-07 01:16:34 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:34.732724 | orchestrator | 2026-01-07 01:16:34 | INFO  
| Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:34.732769 | orchestrator | 2026-01-07 01:16:34 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:37.779601 | orchestrator | 2026-01-07 01:16:37 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:37.780351 | orchestrator | 2026-01-07 01:16:37 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:37.780387 | orchestrator | 2026-01-07 01:16:37 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:40.828150 | orchestrator | 2026-01-07 01:16:40 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:40.828825 | orchestrator | 2026-01-07 01:16:40 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:40.828852 | orchestrator | 2026-01-07 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:43.878720 | orchestrator | 2026-01-07 01:16:43 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:43.879899 | orchestrator | 2026-01-07 01:16:43 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:43.880008 | orchestrator | 2026-01-07 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:46.932659 | orchestrator | 2026-01-07 01:16:46 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:46.934538 | orchestrator | 2026-01-07 01:16:46 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:46.934831 | orchestrator | 2026-01-07 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:49.988230 | orchestrator | 2026-01-07 01:16:49 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:49.990208 | orchestrator | 2026-01-07 01:16:49 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 
01:16:49.990271 | orchestrator | 2026-01-07 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:53.035393 | orchestrator | 2026-01-07 01:16:53 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:53.035621 | orchestrator | 2026-01-07 01:16:53 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:53.035643 | orchestrator | 2026-01-07 01:16:53 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:56.083219 | orchestrator | 2026-01-07 01:16:56 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:56.083729 | orchestrator | 2026-01-07 01:16:56 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:56.083748 | orchestrator | 2026-01-07 01:16:56 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:16:59.134720 | orchestrator | 2026-01-07 01:16:59 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:16:59.135774 | orchestrator | 2026-01-07 01:16:59 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:16:59.135842 | orchestrator | 2026-01-07 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:17:02.200483 | orchestrator | 2026-01-07 01:17:02 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:17:02.202966 | orchestrator | 2026-01-07 01:17:02 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:17:02.203026 | orchestrator | 2026-01-07 01:17:02 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:17:05.264019 | orchestrator | 2026-01-07 01:17:05 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:17:05.267314 | orchestrator | 2026-01-07 01:17:05 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:17:05.267367 | orchestrator | 2026-01-07 01:17:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-07 01:17:08.318527 | orchestrator | 2026-01-07 01:17:08 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state STARTED 2026-01-07 01:17:08.320224 | orchestrator | 2026-01-07 01:17:08 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED 2026-01-07 01:17:08.320542 | orchestrator | 2026-01-07 01:17:08 | INFO  | Wait 1 second(s) until the next check 2026-01-07 01:17:11.375028 | orchestrator | 2026-01-07 01:17:11 | INFO  | Task 5b3b81d0-8d93-4587-85f8-23a0b05c7c63 is in state SUCCESS 2026-01-07 01:17:11.376033 | orchestrator | 2026-01-07 01:17:11.376062 | orchestrator | 2026-01-07 01:17:11.376068 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-07 01:17:11.376085 | orchestrator | 2026-01-07 01:17:11.376090 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-07 01:17:11.376094 | orchestrator | Wednesday 07 January 2026 01:09:06 +0000 (0:00:00.127) 0:00:00.128 ***** 2026-01-07 01:17:11.376098 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:11.376102 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:17:11.376106 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:17:11.376110 | orchestrator | 2026-01-07 01:17:11.376114 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-07 01:17:11.376117 | orchestrator | Wednesday 07 January 2026 01:09:06 +0000 (0:00:00.217) 0:00:00.345 ***** 2026-01-07 01:17:11.376121 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-01-07 01:17:11.376125 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-01-07 01:17:11.376129 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-01-07 01:17:11.376132 | orchestrator | 2026-01-07 01:17:11.376136 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-01-07 01:17:11.376140 | 
orchestrator | 2026-01-07 01:17:11.376143 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-01-07 01:17:11.376147 | orchestrator | Wednesday 07 January 2026 01:09:07 +0000 (0:00:00.498) 0:00:00.844 ***** 2026-01-07 01:17:11.376151 | orchestrator | 2026-01-07 01:17:11.376154 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-01-07 01:17:11.376158 | orchestrator | 2026-01-07 01:17:11.376162 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-01-07 01:17:11.376166 | orchestrator | 2026-01-07 01:17:11.376169 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-01-07 01:17:11.376173 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:11.376177 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:17:11.376180 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:17:11.376184 | orchestrator | 2026-01-07 01:17:11.376188 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:17:11.376192 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:17:11.376197 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:17:11.376201 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:17:11.376205 | orchestrator | 2026-01-07 01:17:11.376209 | orchestrator | 2026-01-07 01:17:11.376212 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:17:11.376216 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:03:48.756) 0:03:49.601 ***** 2026-01-07 01:17:11.376220 | orchestrator | =============================================================================== 2026-01-07 01:17:11.376224 
orchestrator | Waiting for Nova public port to be UP --------------------------------- 228.76s
orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s
orchestrator |
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on OpenStack release] **********************************
orchestrator | Wednesday 07 January 2026 01:08:58 +0000 (0:00:00.263) 0:00:00.263 *****
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Wednesday 07 January 2026 01:08:59 +0000 (0:00:00.809) 0:00:01.073 *****
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Wednesday 07 January 2026 01:09:00 +0000 (0:00:00.741) 0:00:01.814 *****
orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
orchestrator |
orchestrator | PLAY [Bootstrap nova API databases] ********************************************
orchestrator |
orchestrator | TASK [Bootstrap deploy] ********************************************************
orchestrator | Wednesday 07 January 2026 01:09:01 +0000 (0:00:00.887) 0:00:02.702 *****
orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [nova : Creating Nova databases] ******************************************
orchestrator | Wednesday 07 January 2026 01:09:02 +0000 (0:00:00.720) 0:00:03.423 *****
orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
orchestrator | changed: [testbed-node-0] => (item=nova_api)
orchestrator |
orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
orchestrator | Wednesday 07 January 2026 01:09:05 +0000 (0:00:03.675) 0:00:07.098 *****
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : Ensuring config directories exist] ********************************
orchestrator | Wednesday 07 January 2026 01:09:09 +0000 (0:00:03.832) 0:00:10.931 *****
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
orchestrator | Wednesday 07 January 2026 01:09:10 +0000 (0:00:00.651) 0:00:11.582 *****
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
orchestrator | Wednesday 07 January 2026 01:09:11 +0000 (0:00:01.476) 0:00:13.059 *****
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : include_tasks] ****************************************************
orchestrator | Wednesday 07 January 2026 01:09:14 +0000 (0:00:02.825) 0:00:15.885 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
orchestrator | Wednesday 07 January 2026 01:09:14 +0000 (0:00:00.320) 0:00:16.205 *****
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : Create cell0 mappings] ********************************************
orchestrator | Wednesday 07 January 2026 01:09:43 +0000 (0:00:28.753) 0:00:44.959 *****
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
orchestrator | Wednesday 07 January 2026 01:09:57 +0000 (0:00:14.188) 0:00:59.148 *****
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
orchestrator | Wednesday 07 January 2026 01:10:11 +0000 (0:00:13.972) 0:01:13.120 *****
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova : Update cell0 mappings] ********************************************
orchestrator | Wednesday 07 January 2026 01:10:12 +0000 (0:00:01.190) 0:01:14.310 *****
orchestrator | skipping: [testbed-node-0]
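The "Create cell0 mappings" and "Get a list of existing cells" tasks above wrap `nova-manage cell_v2` calls inside the bootstrap container, and the playbook then extracts the current cell settings from the returned table. A minimal sketch of that kind of table parsing (the `parse_cells` helper and the sample table are illustrative, not taken from this job's output):

```python
def parse_cells(output: str) -> dict:
    """Map cell name -> cell UUID from the ASCII table printed by
    `nova-manage cell_v2 list_cells`."""
    cells = {}
    for line in output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        # A data row looks like: | cell0 | <uuid> | ... |
        if len(parts) >= 3 and parts[1] and parts[1] != "Name":
            cells[parts[1]] = parts[2]
    return cells

# cell0 always has the well-known all-zero UUID.
sample = """\
+-------+--------------------------------------+
|  Name |                 UUID                 |
+-------+--------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+"""
```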
orchestrator |
orchestrator | TASK [nova : include_tasks] ****************************************************
orchestrator | Wednesday 07 January 2026 01:10:13 +0000 (0:00:00.469) 0:01:14.780 *****
orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
orchestrator | Wednesday 07 January 2026 01:10:13 +0000 (0:00:00.512) 0:01:15.292 *****
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Bootstrap upgrade] *******************************************************
orchestrator | Wednesday 07 January 2026 01:10:32 +0000 (0:00:19.069) 0:01:34.362 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
orchestrator |
orchestrator | TASK [Bootstrap deploy] ********************************************************
orchestrator | Wednesday 07 January 2026 01:10:33 +0000 (0:00:00.413) 0:01:34.776 *****
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
orchestrator | Wednesday 07 January 2026 01:10:33 +0000 (0:00:00.564) 0:01:35.340 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
orchestrator | Wednesday 07 January 2026 01:10:35 +0000 (0:00:01.746) 0:01:37.087 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
orchestrator | Wednesday 07 January 2026 01:10:37 +0000 (0:00:01.982) 0:01:39.070 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
orchestrator | Wednesday 07 January 2026 01:10:38 +0000 (0:00:00.353) 0:01:39.423 *****
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-0] => (item=None)
orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
orchestrator |
orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
orchestrator | Wednesday 07 January 2026 01:10:47 +0000 (0:00:09.097) 0:01:48.521 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
orchestrator | Wednesday 07 January 2026 01:10:47 +0000 (0:00:00.353) 0:01:48.875 *****
orchestrator | skipping: [testbed-node-0] => (item=None)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
orchestrator | Wednesday 07 January 2026 01:10:48 +0000 (0:00:00.713) 0:01:49.589 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
orchestrator | Wednesday 07 January 2026 01:10:48 +0000 (0:00:00.676) 0:01:50.266 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
orchestrator | Wednesday 07 January 2026 01:10:49 +0000 (0:00:00.843) 0:01:51.109 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
orchestrator | Wednesday 07 January 2026 01:10:51 +0000 (0:00:02.162) 0:01:53.272 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
orchestrator | Wednesday 07 January 2026 01:11:18 +0000 (0:00:26.808) 0:02:20.081 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
orchestrator | Wednesday 07 January 2026 01:11:30 +0000 (0:00:11.868) 0:02:31.949 *****
orchestrator | ok: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [nova-cell : Create cell] *************************************************
orchestrator | Wednesday 07 January 2026 01:11:31 +0000 (0:00:01.138) 0:02:33.088 *****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [nova-cell : Update cell] *************************************************
orchestrator | Wednesday 07 January 2026 01:11:45 +0000 (0:00:13.783) 0:02:46.871 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [Bootstrap upgrade] *******************************************************
orchestrator | Wednesday 07 January 2026 01:11:46 +0000 (0:00:01.046) 0:02:47.918 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Apply role nova] *********************************************************
orchestrator |
orchestrator | TASK [nova : include_tasks] ****************************************************
orchestrator | Wednesday 07 January 2026 01:11:47 +0000 (0:00:00.595) 0:02:48.514 *****
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [service-ks-register : nova | Creating services] **************************
orchestrator | Wednesday 07 January 2026 01:11:47 +0000 (0:00:00.559) 0:02:49.074 *****
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
orchestrator | changed: [testbed-node-0] => (item=nova (compute))
orchestrator |
orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
orchestrator | Wednesday 07 January 2026 01:11:50 +0000 (0:00:02.771) 0:02:51.845 *****
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
orchestrator |
orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
orchestrator | Wednesday 07 January 2026 01:11:56 +0000 (0:00:05.567) 0:02:57.413 *****
orchestrator | ok: [testbed-node-0] => (item=service)
orchestrator |
orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
orchestrator | Wednesday 07 January 2026 01:11:59 +0000 (0:00:03.424) 0:03:00.838 *****
orchestrator | [WARNING]: Module did not set no_log for update_password
orchestrator | changed: [testbed-node-0] => (item=nova -> service)
orchestrator |
orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
orchestrator | Wednesday 07 January 2026 01:12:03 +0000 (0:00:04.435) 0:03:05.273 *****
orchestrator | ok: [testbed-node-0] => (item=admin)
orchestrator |
orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
orchestrator | Wednesday 07 January 2026 01:12:07 +0000 (0:00:03.300) 0:03:08.574 *****
orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
orchestrator |
orchestrator | TASK [nova : Ensuring config directories exist] ********************************
orchestrator | Wednesday 07 January 2026 01:12:15 +0000 (0:00:08.015) 0:03:16.590 *****
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator |
orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
orchestrator | Wednesday 07 January 2026 01:12:16 +0000 (0:00:01.327) 0:03:17.917 *****
orchestrator | skipping: [testbed-node-0]
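The service dicts repeated above each carry a 'haproxy' section, and note a quirk visible in them: 'enabled' is sometimes the boolean True and sometimes the string 'no' (see nova_metadata_external), so a plain truthiness check would misread 'no' as enabled when post-processing such data. A minimal sketch of normalising that, using a hypothetical to_bool helper (not kolla code):

```python
def to_bool(value) -> bool:
    """Normalise kolla-style enabled flags: True/'yes'/'true'/'1' are on."""
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

# Trimmed-down excerpt of the 'haproxy' section seen in the log above.
haproxy = {
    "nova_api_external": {"enabled": True, "port": "8774"},
    "nova_metadata_external": {"enabled": "no", "port": "8775"},
}

# Only services whose flag normalises to True should get a frontend.
enabled_ports = [
    svc["port"] for svc in haproxy.values() if to_bool(svc["enabled"])
]
```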
orchestrator |
orchestrator | TASK [nova : Set nova policy file] *********************************************
orchestrator | Wednesday 07 January 2026 01:12:16 +0000 (0:00:00.141) 0:03:18.059 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [nova : Check for vendordata file] ****************************************
orchestrator | Wednesday 07 January 2026 01:12:16 +0000 (0:00:00.336) 0:03:18.395 *****
orchestrator | ok: [testbed-node-0 -> localhost]
orchestrator |
orchestrator | TASK [nova : Set vendordata file path] *****************************************
orchestrator | Wednesday 07 January 2026 01:12:17 +0000 (0:00:00.949) 0:03:19.345 *****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [nova : include_tasks] ****************************************************
orchestrator | Wednesday 07 January 2026 01:12:18 +0000 (0:00:00.340) 0:03:19.685 *****
orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
orchestrator | Wednesday 07 January 2026 01:12:18 +0000 (0:00:00.542) 0:03:20.227 *****
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
orchestrator |
orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
orchestrator | Wednesday 07 January 2026 01:12:21 +0000 (0:00:02.298) 0:03:22.526 *****
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'],
'timeout': '30'}}})  2026-01-07 01:17:11.377938 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.377944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.377951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2026-01-07 01:17:11.377957 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.377966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-01-07 01:17:11.378230 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.378236 | orchestrator | 2026-01-07 01:17:11.378242 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-07 01:17:11.378248 | orchestrator | Wednesday 07 January 2026 01:12:22 +0000 (0:00:00.878) 0:03:23.404 ***** 2026-01-07 01:17:11.378255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378267 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.378278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378298 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.378309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378323 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.378329 | orchestrator | 2026-01-07 01:17:11.378335 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-07 01:17:11.378341 | orchestrator | Wednesday 07 January 2026 01:12:22 +0000 (0:00:00.823) 0:03:24.228 ***** 2026-01-07 01:17:11.378350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378364 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378405 | orchestrator | 2026-01-07 01:17:11.378411 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-07 01:17:11.378416 | orchestrator | Wednesday 07 January 2026 01:12:25 +0000 (0:00:02.256) 0:03:26.484 ***** 2026-01-07 01:17:11.378427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-01-07 01:17:11.378434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378478 | orchestrator | 2026-01-07 01:17:11.378484 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-07 01:17:11.378490 | orchestrator | Wednesday 07 January 2026 01:12:30 +0000 (0:00:05.739) 0:03:32.224 ***** 2026-01-07 01:17:11.378497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378517 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.378527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378541 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.378547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-01-07 01:17:11.378563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.378569 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.378575 | orchestrator | 2026-01-07 01:17:11.378581 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-07 01:17:11.378587 | orchestrator | Wednesday 07 January 2026 01:12:31 +0000 (0:00:00.624) 0:03:32.848 ***** 2026-01-07 01:17:11.378593 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:11.378599 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:11.378605 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:11.378611 | orchestrator | 2026-01-07 01:17:11.378618 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-07 01:17:11.378624 | orchestrator | Wednesday 07 January 2026 01:12:32 +0000 (0:00:01.535) 0:03:34.384 ***** 2026-01-07 01:17:11.378630 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.378636 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.378643 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.378649 | orchestrator | 2026-01-07 01:17:11.378656 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-07 01:17:11.378662 | orchestrator | Wednesday 07 January 2026 01:12:33 +0000 (0:00:00.367) 
0:03:34.751 ***** 2026-01-07 01:17:11.378673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:11.378702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.378739 | orchestrator | 2026-01-07 01:17:11.378745 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:11.378755 | orchestrator | Wednesday 07 
January 2026 01:12:35 +0000 (0:00:01.982) 0:03:36.734 ***** 2026-01-07 01:17:11.378761 | orchestrator | 2026-01-07 01:17:11.378768 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:11.378774 | orchestrator | Wednesday 07 January 2026 01:12:35 +0000 (0:00:00.132) 0:03:36.866 ***** 2026-01-07 01:17:11.378780 | orchestrator | 2026-01-07 01:17:11.378786 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-07 01:17:11.378793 | orchestrator | Wednesday 07 January 2026 01:12:35 +0000 (0:00:00.126) 0:03:36.992 ***** 2026-01-07 01:17:11.378799 | orchestrator | 2026-01-07 01:17:11.378806 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-07 01:17:11.378812 | orchestrator | Wednesday 07 January 2026 01:12:35 +0000 (0:00:00.126) 0:03:37.119 ***** 2026-01-07 01:17:11.378818 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:11.378823 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:11.378829 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:11.378835 | orchestrator | 2026-01-07 01:17:11.378841 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-07 01:17:11.378848 | orchestrator | Wednesday 07 January 2026 01:12:48 +0000 (0:00:12.973) 0:03:50.092 ***** 2026-01-07 01:17:11.378879 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:11.378927 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:11.378944 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:11.378951 | orchestrator | 2026-01-07 01:17:11.378958 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-07 01:17:11.378965 | orchestrator | 2026-01-07 01:17:11.378971 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:11.378978 | 
orchestrator | Wednesday 07 January 2026 01:12:54 +0000 (0:00:05.338) 0:03:55.431 ***** 2026-01-07 01:17:11.378985 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:11.378992 | orchestrator | 2026-01-07 01:17:11.379003 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:11.379010 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:01.304) 0:03:56.735 ***** 2026-01-07 01:17:11.379017 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.379025 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.379033 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.379040 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.379050 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.379057 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.379063 | orchestrator | 2026-01-07 01:17:11.379069 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-07 01:17:11.379075 | orchestrator | Wednesday 07 January 2026 01:12:55 +0000 (0:00:00.625) 0:03:57.360 ***** 2026-01-07 01:17:11.379082 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.379087 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.379094 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.379100 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:17:11.379106 | orchestrator | 2026-01-07 01:17:11.379113 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-07 01:17:11.379130 | orchestrator | Wednesday 07 January 2026 01:12:57 +0000 (0:00:01.114) 0:03:58.475 ***** 2026-01-07 01:17:11.379142 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-07 
01:17:11.379150 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:17:11.379168 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:17:11.379176 | orchestrator | 2026-01-07 01:17:11.379592 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-07 01:17:11.379610 | orchestrator | Wednesday 07 January 2026 01:12:57 +0000 (0:00:00.626) 0:03:59.102 ***** 2026-01-07 01:17:11.379616 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-07 01:17:11.379629 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-07 01:17:11.379636 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-07 01:17:11.379642 | orchestrator | 2026-01-07 01:17:11.379648 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-07 01:17:11.379655 | orchestrator | Wednesday 07 January 2026 01:12:58 +0000 (0:00:01.204) 0:04:00.306 ***** 2026-01-07 01:17:11.379661 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-07 01:17:11.379667 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.379673 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-07 01:17:11.379679 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.379685 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-07 01:17:11.379691 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.379698 | orchestrator | 2026-01-07 01:17:11.379704 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-07 01:17:11.379711 | orchestrator | Wednesday 07 January 2026 01:12:59 +0000 (0:00:00.528) 0:04:00.835 ***** 2026-01-07 01:17:11.379748 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:11.379756 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:11.379762 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.379769 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:11.379775 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:11.379781 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.379788 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:11.379795 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:11.379801 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-07 01:17:11.379808 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-07 01:17:11.379814 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.379821 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:11.379827 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:11.379833 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-07 01:17:11.379837 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-07 01:17:11.379841 | orchestrator | 2026-01-07 01:17:11.379844 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-07 01:17:11.379848 | orchestrator | Wednesday 07 January 2026 01:13:01 +0000 (0:00:02.161) 0:04:02.996 ***** 2026-01-07 01:17:11.379852 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.379855 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.379859 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.379863 | orchestrator | changed: [testbed-node-3] 2026-01-07 
01:17:11.379867 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:11.379870 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:11.379874 | orchestrator | 2026-01-07 01:17:11.379878 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-07 01:17:11.379881 | orchestrator | Wednesday 07 January 2026 01:13:02 +0000 (0:00:01.043) 0:04:04.040 ***** 2026-01-07 01:17:11.379885 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.379889 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.379892 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.379896 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:11.379900 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:11.379903 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:11.379907 | orchestrator | 2026-01-07 01:17:11.379914 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-07 01:17:11.379918 | orchestrator | Wednesday 07 January 2026 01:13:04 +0000 (0:00:01.495) 0:04:05.536 ***** 2026-01-07 01:17:11.379926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379978 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.379990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-01-07 01:17:11.379996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380014 | orchestrator | 2026-01-07 01:17:11.380020 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:11.380029 | orchestrator | Wednesday 07 January 2026 01:13:06 +0000 (0:00:02.261) 0:04:07.797 ***** 2026-01-07 01:17:11.380037 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:11.380044 | orchestrator | 2026-01-07 01:17:11.380050 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-07 01:17:11.380056 | orchestrator | Wednesday 07 January 2026 01:13:07 +0000 (0:00:01.208) 0:04:09.006 ***** 2026-01-07 01:17:11.380062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.380173 | orchestrator | 2026-01-07 01:17:11.380177 | 
orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-07 01:17:11.380181 | orchestrator | Wednesday 07 January 2026 01:13:10 +0000 (0:00:03.359) 0:04:12.365 ***** 2026-01-07 01:17:11.380187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380201 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.380205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380223 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.380228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380244 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.380250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380260 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.380267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380279 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.380283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380296 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.380306 | orchestrator | 2026-01-07 01:17:11.380313 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-07 01:17:11.380319 | orchestrator | 
Wednesday 07 January 2026 01:13:12 +0000 (0:00:01.623) 0:04:13.988 ***** 2026-01-07 01:17:11.380329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380351 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.380356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380369 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.380376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.380383 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.380388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380395 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.380399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380408 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.380413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380424 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.380432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.380439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.380444 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.380448 | orchestrator | 2026-01-07 01:17:11.380452 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:11.380457 | orchestrator | Wednesday 07 January 2026 01:13:14 +0000 (0:00:02.232) 0:04:16.221 ***** 2026-01-07 01:17:11.380461 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.380466 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.380470 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:17:11.380474 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-07 01:17:11.380479 | orchestrator | 2026-01-07 01:17:11.380483 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-07 01:17:11.380488 | orchestrator | Wednesday 07 January 2026 01:13:15 +0000 (0:00:01.055) 0:04:17.277 ***** 2026-01-07 01:17:11.380492 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 01:17:11.380497 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 01:17:11.380501 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 01:17:11.380505 | orchestrator | 2026-01-07 01:17:11.380509 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-07 01:17:11.380514 | orchestrator | Wednesday 07 January 2026 01:13:16 +0000 (0:00:00.887) 0:04:18.164 ***** 2026-01-07 01:17:11.380518 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-07 01:17:11.380523 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-07 01:17:11.380527 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-07 01:17:11.380531 | orchestrator | 2026-01-07 01:17:11.380535 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-07 01:17:11.380540 | orchestrator | Wednesday 07 January 2026 01:13:17 +0000 (0:00:00.988) 0:04:19.153 ***** 2026-01-07 01:17:11.380544 | orchestrator | ok: [testbed-node-3] 2026-01-07 01:17:11.380548 | orchestrator | ok: [testbed-node-4] 2026-01-07 01:17:11.380552 | orchestrator | ok: [testbed-node-5] 2026-01-07 01:17:11.380557 | orchestrator | 2026-01-07 01:17:11.380561 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-07 01:17:11.380566 | orchestrator | Wednesday 07 January 2026 01:13:18 +0000 (0:00:00.515) 0:04:19.668 ***** 2026-01-07 
01:17:11.380570 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:17:11.380574 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:17:11.380579 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:17:11.380583 | orchestrator |
2026-01-07 01:17:11.380587 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-01-07 01:17:11.380591 | orchestrator | Wednesday 07 January 2026 01:13:19 +0000 (0:00:00.801) 0:04:20.470 *****
2026-01-07 01:17:11.380596 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:11.380600 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:11.380605 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:11.380610 | orchestrator |
2026-01-07 01:17:11.380614 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-07 01:17:11.380618 | orchestrator | Wednesday 07 January 2026 01:13:20 +0000 (0:00:01.115) 0:04:21.585 *****
2026-01-07 01:17:11.380623 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:11.380627 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:11.380635 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:11.380639 | orchestrator |
2026-01-07 01:17:11.380644 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-07 01:17:11.380648 | orchestrator | Wednesday 07 January 2026 01:13:21 +0000 (0:00:01.067) 0:04:22.652 *****
2026-01-07 01:17:11.380652 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:11.380656 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:11.380659 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:11.380663 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-07 01:17:11.380667 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-07 01:17:11.380670 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-07 01:17:11.380674 | orchestrator |
2026-01-07 01:17:11.380678 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-07 01:17:11.380681 | orchestrator | Wednesday 07 January 2026 01:13:24 +0000 (0:00:03.725) 0:04:26.378 *****
2026-01-07 01:17:11.380685 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.380689 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.380693 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.380697 | orchestrator |
2026-01-07 01:17:11.380702 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-07 01:17:11.380706 | orchestrator | Wednesday 07 January 2026 01:13:25 +0000 (0:00:00.595) 0:04:26.974 *****
2026-01-07 01:17:11.380710 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.380738 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.380742 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.380746 | orchestrator |
2026-01-07 01:17:11.380749 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-07 01:17:11.380753 | orchestrator | Wednesday 07 January 2026 01:13:25 +0000 (0:00:00.323) 0:04:27.298 *****
2026-01-07 01:17:11.380757 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.380761 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.380765 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.380768 | orchestrator |
2026-01-07 01:17:11.380772 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-07 01:17:11.380776 | orchestrator | Wednesday 07 January 2026 01:13:27 +0000 (0:00:01.206) 0:04:28.504 *****
2026-01-07 01:17:11.380780 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:11.380784 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:11.380788 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-07 01:17:11.380792 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:11.380795 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:11.380799 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-07 01:17:11.380803 | orchestrator |
2026-01-07 01:17:11.380807 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-07 01:17:11.380810 | orchestrator | Wednesday 07 January 2026 01:13:30 +0000 (0:00:03.293) 0:04:31.797 *****
2026-01-07 01:17:11.380814 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:17:11.380818 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:17:11.380822 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:17:11.380825 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-07 01:17:11.380832 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.380835 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-07 01:17:11.380839 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.380843 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-07 01:17:11.380846 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.380850 | orchestrator |
2026-01-07 01:17:11.380854 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-07 01:17:11.380858 | orchestrator | Wednesday 07 January 2026 01:13:33 +0000 (0:00:03.245) 0:04:35.043 *****
2026-01-07 01:17:11.380861 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.380865 | orchestrator |
2026-01-07 01:17:11.380869 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-07 01:17:11.380872 | orchestrator | Wednesday 07 January 2026 01:13:33 +0000 (0:00:00.140) 0:04:35.184 *****
2026-01-07 01:17:11.380876 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.380880 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.380884 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.380887 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.380891 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.380895 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.380898 | orchestrator |
2026-01-07 01:17:11.380902 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-07 01:17:11.380906 | orchestrator | Wednesday 07 January 2026 01:13:34 +0000 (0:00:00.598) 0:04:35.782 *****
2026-01-07 01:17:11.380909 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-07 01:17:11.380913 | orchestrator |
2026-01-07 01:17:11.380917 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-01-07 01:17:11.380921 | orchestrator | Wednesday 07 January 2026 01:13:35 +0000 (0:00:00.662) 0:04:36.445 *****
2026-01-07 01:17:11.380924 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.380928 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.380932 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.380935 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.380939 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.380943 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.380946 | orchestrator |
2026-01-07 01:17:11.380952 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-01-07 01:17:11.380956 | orchestrator | Wednesday 07 January 2026 01:13:35 +0000 (0:00:00.798) 0:04:37.244 *****
2026-01-07 01:17:11.380963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.380967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.380974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.380978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.380982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.380988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.380996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381041 | orchestrator |
2026-01-07 01:17:11.381044 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-01-07 01:17:11.381048 | orchestrator | Wednesday 07 January 2026 01:13:39 +0000 (0:00:04.109) 0:04:41.353 *****
2026-01-07 01:17:11.381052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.381056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.381153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-07 01:17:11.381171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-07 01:17:11.381177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.381232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.381238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-07 01:17:11.381245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-07 01:17:11.381270 | orchestrator |
2026-01-07 01:17:11.381280 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-01-07 01:17:11.381287 | orchestrator | Wednesday 07 January 2026 01:13:46 +0000 (0:00:06.450) 0:04:47.804 *****
2026-01-07 01:17:11.381293 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.381300 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.381310 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.381316 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381322 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381329 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381335 | orchestrator |
2026-01-07 01:17:11.381341 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-01-07 01:17:11.381348 | orchestrator | Wednesday 07 January 2026 01:13:47 +0000 (0:00:01.386) 0:04:49.191 *****
2026-01-07 01:17:11.381352 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381356 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381360 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381363 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381367 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381371 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381375 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381379 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381383 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381386 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381390 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381394 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-07 01:17:11.381397 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381401 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381405 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-07 01:17:11.381409 | orchestrator |
2026-01-07 01:17:11.381412 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-01-07 01:17:11.381416 | orchestrator | Wednesday 07 January 2026 01:13:52 +0000 (0:00:04.340) 0:04:53.531 *****
2026-01-07 01:17:11.381420 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.381423 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.381427 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.381431 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381434 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381438 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381442 | orchestrator |
2026-01-07 01:17:11.381446 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-01-07 01:17:11.381449 | orchestrator | Wednesday 07 January 2026 01:13:52 +0000 (0:00:00.618) 0:04:54.150 *****
2026-01-07 01:17:11.381453 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381457 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381461 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381479 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381482 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381488 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381492 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-07 01:17:11.381496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381500 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381504 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381507 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381511 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381515 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381519 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381525 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381529 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381533 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381537 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381540 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-07 01:17:11.381544 | orchestrator |
2026-01-07 01:17:11.381548 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-01-07 01:17:11.381552 | orchestrator | Wednesday 07 January 2026 01:13:58 +0000 (0:00:05.515) 0:04:59.665 *****
2026-01-07 01:17:11.381555 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381559 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381563 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381566 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381570 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381574 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381578 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381581 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381585 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-07 01:17:11.381589 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381592 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381596 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381600 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381606 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381609 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381613 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381617 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381621 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381624 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381641 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381644 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-07 01:17:11.381648 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381652 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381655 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-07 01:17:11.381659 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381665 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381672 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-07 01:17:11.381678 | orchestrator |
2026-01-07 01:17:11.381685 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-01-07 01:17:11.381691 | orchestrator | Wednesday 07 January 2026 01:14:05 +0000 (0:00:07.107) 0:05:06.773 *****
2026-01-07 01:17:11.381697 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.381702 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.381708 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.381728 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.381734 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.381739 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.381745 | orchestrator |
2026-01-07 01:17:11.381750 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-01-07 01:17:11.381756 | orchestrator | Wednesday 07 January 2026 01:14:06 +0000 (0:00:00.824) 0:05:07.597 ***** 2026-01-07 01:17:11.381762 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.381767 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.381774 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.381780 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.381787 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.381795 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.381802 | orchestrator | 2026-01-07 01:17:11.381809 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-07 01:17:11.381816 | orchestrator | Wednesday 07 January 2026 01:14:06 +0000 (0:00:00.590) 0:05:08.187 ***** 2026-01-07 01:17:11.381822 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.381829 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.381835 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.381847 | orchestrator | changed: [testbed-node-3] 2026-01-07 01:17:11.381855 | orchestrator | changed: [testbed-node-4] 2026-01-07 01:17:11.381861 | orchestrator | changed: [testbed-node-5] 2026-01-07 01:17:11.381868 | orchestrator | 2026-01-07 01:17:11.381875 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-07 01:17:11.381882 | orchestrator | Wednesday 07 January 2026 01:14:09 +0000 (0:00:02.323) 0:05:10.511 ***** 2026-01-07 01:17:11.381891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.381905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.381914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.381921 | orchestrator | skipping: 
[testbed-node-4] 2026-01-07 01:17:11.381932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.381941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.381953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.381966 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.381973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.381980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.381986 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.381993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-07 01:17:11.382003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-07 01:17:11.382051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.382066 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.382073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.382081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.382088 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.382095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-07 01:17:11.382102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-07 01:17:11.382109 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.382116 | orchestrator | 2026-01-07 01:17:11.382120 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-07 01:17:11.382127 | orchestrator | Wednesday 07 January 2026 01:14:10 +0000 (0:00:01.374) 0:05:11.886 ***** 2026-01-07 01:17:11.382134 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-07 01:17:11.382140 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382146 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.382152 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-07 01:17:11.382159 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382165 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.382175 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-07 
01:17:11.382182 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382190 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.382194 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-07 01:17:11.382198 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382201 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.382208 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-07 01:17:11.382212 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382216 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.382219 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-07 01:17:11.382223 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-07 01:17:11.382227 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.382231 | orchestrator | 2026-01-07 01:17:11.382234 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-07 01:17:11.382242 | orchestrator | Wednesday 07 January 2026 01:14:11 +0000 (0:00:00.870) 0:05:12.756 ***** 2026-01-07 01:17:11.382246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382278 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:11.382346 | orchestrator | 2026-01-07 01:17:11.382353 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-07 01:17:11.382360 | orchestrator | Wednesday 07 January 2026 01:14:14 +0000 (0:00:02.938) 0:05:15.695 ***** 2026-01-07 01:17:11.382366 | orchestrator | skipping: [testbed-node-3] 2026-01-07 01:17:11.382373 | orchestrator | skipping: [testbed-node-4] 2026-01-07 01:17:11.382380 | orchestrator | skipping: [testbed-node-5] 2026-01-07 01:17:11.382386 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:11.382391 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:11.382395 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:11.382403 | orchestrator | 2026-01-07 01:17:11.382407 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:11.382411 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.775) 0:05:16.470 ***** 2026-01-07 01:17:11.382414 | orchestrator | 2026-01-07 01:17:11.382418 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:11.382422 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.146) 0:05:16.617 ***** 2026-01-07 01:17:11.382426 | orchestrator | 2026-01-07 01:17:11.382429 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 01:17:11.382433 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.129) 0:05:16.746 ***** 2026-01-07 01:17:11.382437 | orchestrator | 2026-01-07 01:17:11.382440 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-07 
01:17:11.382446 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.145) 0:05:16.892 *****
2026-01-07 01:17:11.382450 | orchestrator |
2026-01-07 01:17:11.382454 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-07 01:17:11.382457 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.143) 0:05:17.035 *****
2026-01-07 01:17:11.382461 | orchestrator |
2026-01-07 01:17:11.382465 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-07 01:17:11.382468 | orchestrator | Wednesday 07 January 2026 01:14:15 +0000 (0:00:00.126) 0:05:17.162 *****
2026-01-07 01:17:11.382472 | orchestrator |
2026-01-07 01:17:11.382476 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-01-07 01:17:11.382479 | orchestrator | Wednesday 07 January 2026 01:14:16 +0000 (0:00:00.302) 0:05:17.464 *****
2026-01-07 01:17:11.382483 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:11.382487 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:11.382490 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:11.382494 | orchestrator |
2026-01-07 01:17:11.382498 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-01-07 01:17:11.382501 | orchestrator | Wednesday 07 January 2026 01:14:28 +0000 (0:00:12.241) 0:05:29.706 *****
2026-01-07 01:17:11.382507 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:11.382511 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:11.382515 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:11.382519 | orchestrator |
2026-01-07 01:17:11.382523 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-01-07 01:17:11.382526 | orchestrator | Wednesday 07 January 2026 01:14:40 +0000 (0:00:12.679) 0:05:42.385 *****
2026-01-07 01:17:11.382530 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.382534 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.382537 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.382541 | orchestrator |
2026-01-07 01:17:11.382545 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-01-07 01:17:11.382548 | orchestrator | Wednesday 07 January 2026 01:15:00 +0000 (0:00:19.845) 0:06:02.231 *****
2026-01-07 01:17:11.382552 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.382556 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.382559 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.382563 | orchestrator |
2026-01-07 01:17:11.382567 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-01-07 01:17:11.382570 | orchestrator | Wednesday 07 January 2026 01:15:30 +0000 (0:00:29.621) 0:06:31.852 *****
2026-01-07 01:17:11.382574 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.382578 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.382581 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.382585 | orchestrator |
2026-01-07 01:17:11.382589 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-01-07 01:17:11.382592 | orchestrator | Wednesday 07 January 2026 01:15:31 +0000 (0:00:00.816) 0:06:32.669 *****
2026-01-07 01:17:11.382596 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.382600 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.382606 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.382609 | orchestrator |
2026-01-07 01:17:11.382613 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-01-07 01:17:11.382617 | orchestrator | Wednesday 07 January 2026 01:15:32 +0000 (0:00:00.793) 0:06:33.463 *****
2026-01-07 01:17:11.382621 | orchestrator | changed: [testbed-node-4]
2026-01-07 01:17:11.382624 | orchestrator | changed: [testbed-node-5]
2026-01-07 01:17:11.382628 | orchestrator | changed: [testbed-node-3]
2026-01-07 01:17:11.382632 | orchestrator |
2026-01-07 01:17:11.382635 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-01-07 01:17:11.382639 | orchestrator | Wednesday 07 January 2026 01:15:56 +0000 (0:00:24.023) 0:06:57.486 *****
2026-01-07 01:17:11.382643 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.382646 | orchestrator |
2026-01-07 01:17:11.382650 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-01-07 01:17:11.382654 | orchestrator | Wednesday 07 January 2026 01:15:56 +0000 (0:00:00.137) 0:06:57.623 *****
2026-01-07 01:17:11.382658 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.382661 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.382665 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.382669 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.382672 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.382676 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-01-07 01:17:11.382681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 01:17:11.382684 | orchestrator |
2026-01-07 01:17:11.382688 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-01-07 01:17:11.382692 | orchestrator | Wednesday 07 January 2026 01:16:18 +0000 (0:00:22.112) 0:07:19.735 *****
2026-01-07 01:17:11.382695 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.382699 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.382703 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.382706 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.382710 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.382743 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.382747 | orchestrator |
2026-01-07 01:17:11.382751 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-01-07 01:17:11.382755 | orchestrator | Wednesday 07 January 2026 01:16:27 +0000 (0:00:08.900) 0:07:28.635 *****
2026-01-07 01:17:11.382759 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.382762 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.382766 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.382770 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.382773 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.382777 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-01-07 01:17:11.382781 | orchestrator |
2026-01-07 01:17:11.382785 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-07 01:17:11.382788 | orchestrator | Wednesday 07 January 2026 01:16:31 +0000 (0:00:03.774) 0:07:32.410 *****
2026-01-07 01:17:11.382792 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 01:17:11.382796 | orchestrator |
2026-01-07 01:17:11.382802 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-07 01:17:11.382805 | orchestrator | Wednesday 07 January 2026 01:16:45 +0000 (0:00:14.419) 0:07:46.829 *****
2026-01-07 01:17:11.382809 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 01:17:11.382813 | orchestrator |
2026-01-07 01:17:11.382817 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-01-07 01:17:11.382820 | orchestrator | Wednesday 07 January 2026 01:16:46 +0000 (0:00:01.364) 0:07:48.194 *****
2026-01-07 01:17:11.382824 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.382828 | orchestrator |
2026-01-07 01:17:11.382834 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-01-07 01:17:11.382838 | orchestrator | Wednesday 07 January 2026 01:16:48 +0000 (0:00:01.295) 0:07:49.489 *****
2026-01-07 01:17:11.382842 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-07 01:17:11.382845 | orchestrator |
2026-01-07 01:17:11.382849 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-01-07 01:17:11.382855 | orchestrator | Wednesday 07 January 2026 01:17:00 +0000 (0:00:12.896) 0:08:02.385 *****
2026-01-07 01:17:11.382859 | orchestrator | ok: [testbed-node-3]
2026-01-07 01:17:11.382863 | orchestrator | ok: [testbed-node-4]
2026-01-07 01:17:11.382867 | orchestrator | ok: [testbed-node-5]
2026-01-07 01:17:11.382871 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:11.382874 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:11.382878 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:11.382882 | orchestrator |
2026-01-07 01:17:11.382885 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-01-07 01:17:11.382889 | orchestrator |
2026-01-07 01:17:11.382893 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-01-07 01:17:11.382897 | orchestrator | Wednesday 07 January 2026 01:17:02 +0000 (0:00:01.942) 0:08:04.328 *****
2026-01-07 01:17:11.382900 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:11.382904 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:11.382908 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:11.382911 | orchestrator |
2026-01-07 01:17:11.382915 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-01-07 01:17:11.382919 | orchestrator |
2026-01-07 01:17:11.382923 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-01-07 01:17:11.382926 | orchestrator | Wednesday 07 January 2026 01:17:04 +0000 (0:00:01.203) 0:08:05.531 *****
2026-01-07 01:17:11.382930 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.382934 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.382937 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.382941 | orchestrator |
2026-01-07 01:17:11.382945 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-01-07 01:17:11.382949 | orchestrator |
2026-01-07 01:17:11.382952 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-01-07 01:17:11.382956 | orchestrator | Wednesday 07 January 2026 01:17:04 +0000 (0:00:00.532) 0:08:06.063 *****
2026-01-07 01:17:11.382960 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-01-07 01:17:11.382963 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-07 01:17:11.382967 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-07 01:17:11.382971 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-01-07 01:17:11.382975 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-01-07 01:17:11.382978 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.382982 | orchestrator | skipping: [testbed-node-3]
2026-01-07 01:17:11.382986 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-01-07 01:17:11.382989 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-07 01:17:11.382993 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-07 01:17:11.382997 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-01-07 01:17:11.383000 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-01-07 01:17:11.383004 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.383008 | orchestrator | skipping: [testbed-node-4]
2026-01-07 01:17:11.383012 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-01-07 01:17:11.383018 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-07 01:17:11.383025 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-07 01:17:11.383032 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-01-07 01:17:11.383042 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-01-07 01:17:11.383048 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.383054 | orchestrator | skipping: [testbed-node-5]
2026-01-07 01:17:11.383061 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-01-07 01:17:11.383067 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-07 01:17:11.383073 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-07 01:17:11.383080 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-01-07 01:17:11.383087 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-01-07 01:17:11.383094 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.383100 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.383107 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-01-07 01:17:11.383114 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-07 01:17:11.383120 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-07 01:17:11.383127 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-01-07 01:17:11.383134 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-01-07 01:17:11.383141 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.383151 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.383159 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-01-07 01:17:11.383166 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-07 01:17:11.383174 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-07 01:17:11.383181 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-01-07 01:17:11.383188 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-01-07 01:17:11.383194 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-01-07 01:17:11.383201 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.383208 | orchestrator |
2026-01-07 01:17:11.383215 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-01-07 01:17:11.383221 | orchestrator |
2026-01-07 01:17:11.383228 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-01-07 01:17:11.383235 | orchestrator | Wednesday 07 January 2026 01:17:06 +0000 (0:00:01.373) 0:08:07.436 *****
2026-01-07 01:17:11.383242 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-01-07 01:17:11.383253 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-01-07 01:17:11.383260 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.383266 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-01-07 01:17:11.383272 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-01-07 01:17:11.383282 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.383288 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-01-07 01:17:11.383294 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-01-07 01:17:11.383300 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.383306 | orchestrator |
2026-01-07 01:17:11.383313 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-01-07 01:17:11.383319 | orchestrator |
2026-01-07 01:17:11.383326 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-01-07 01:17:11.383333 | orchestrator | Wednesday 07 January 2026 01:17:06 +0000 (0:00:00.687) 0:08:08.266 *****
2026-01-07 01:17:11.383339 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.383345 | orchestrator |
2026-01-07 01:17:11.383351 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-01-07 01:17:11.383356 | orchestrator |
2026-01-07 01:17:11.383360 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-01-07 01:17:11.383369 | orchestrator | Wednesday 07 January 2026 01:17:07 +0000 (0:00:00.687) 0:08:08.953 *****
2026-01-07 01:17:11.383373 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:11.383377 | orchestrator | skipping: [testbed-node-1]
2026-01-07 01:17:11.383381 | orchestrator | skipping: [testbed-node-2]
2026-01-07 01:17:11.383384 | orchestrator |
2026-01-07 01:17:11.383388 | orchestrator | PLAY RECAP *********************************************************************
2026-01-07 01:17:11.383392 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-07 01:17:11.383397 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-01-07 01:17:11.383401 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-07 01:17:11.383405 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-01-07 01:17:11.383408 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-07 01:17:11.383412 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-07 01:17:11.383416 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-07 01:17:11.383420 | orchestrator |
2026-01-07 01:17:11.383423 | orchestrator |
2026-01-07 01:17:11.383427 | orchestrator | TASKS RECAP ********************************************************************
2026-01-07 01:17:11.383431 | orchestrator | Wednesday 07 January 2026 01:17:07 +0000 (0:00:00.429) 0:08:09.383 *****
2026-01-07 01:17:11.383434 | orchestrator | ===============================================================================
2026-01-07 01:17:11.383438 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.62s
2026-01-07 01:17:11.383442 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.75s
2026-01-07 01:17:11.383446 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 26.81s
2026-01-07 01:17:11.383449 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.02s
2026-01-07 01:17:11.383453 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.11s
2026-01-07 01:17:11.383457 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.85s
2026-01-07 01:17:11.383460 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.07s
2026-01-07 01:17:11.383464 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.42s
2026-01-07 01:17:11.383468 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.19s
2026-01-07 01:17:11.383471 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.97s
2026-01-07 01:17:11.383478 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.78s
2026-01-07 01:17:11.383482 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 12.97s
2026-01-07 01:17:11.383485 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.90s
2026-01-07 01:17:11.383489 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.68s
2026-01-07 01:17:11.383493 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.24s
2026-01-07 01:17:11.383496 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.87s
2026-01-07 01:17:11.383500 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.10s
2026-01-07 01:17:11.383504 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.90s
2026-01-07 01:17:11.383510 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.02s
2026-01-07 01:17:11.383514 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.11s
2026-01-07 01:17:11.383521 | orchestrator | 2026-01-07 01:17:11 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:11.383525 | orchestrator | 2026-01-07 01:17:11 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:14.425002 | orchestrator | 2026-01-07 01:17:14 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:14.425054 | orchestrator | 2026-01-07 01:17:14 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:17.468537 | orchestrator | 2026-01-07 01:17:17 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:17.468598 | orchestrator | 2026-01-07 01:17:17 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:20.513098 | orchestrator | 2026-01-07 01:17:20 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:20.513173 | orchestrator | 2026-01-07 01:17:20 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:23.560263 | orchestrator | 2026-01-07 01:17:23 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:23.560322 | orchestrator | 2026-01-07 01:17:23 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:26.606006 | orchestrator | 2026-01-07 01:17:26 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:26.606090 | orchestrator | 2026-01-07 01:17:26 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:29.653101 | orchestrator | 2026-01-07 01:17:29 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:29.653153 | orchestrator | 2026-01-07 01:17:29 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:32.704919 | orchestrator | 2026-01-07 01:17:32 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:32.704971 | orchestrator | 2026-01-07 01:17:32 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:35.753044 | orchestrator | 2026-01-07 01:17:35 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:35.753118 | orchestrator | 2026-01-07 01:17:35 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:38.803509 | orchestrator | 2026-01-07 01:17:38 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:38.803597 | orchestrator | 2026-01-07 01:17:38 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:41.860416 | orchestrator | 2026-01-07 01:17:41 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:41.860469 | orchestrator | 2026-01-07 01:17:41 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:44.910189 | orchestrator | 2026-01-07 01:17:44 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state STARTED
2026-01-07 01:17:44.910244 | orchestrator | 2026-01-07 01:17:44 | INFO  | Wait 1 second(s) until the next check
2026-01-07 01:17:47.957702 | orchestrator | 2026-01-07 01:17:47 | INFO  | Task 183d7cb7-bb5a-4864-969c-31b667d379b9 is in state SUCCESS
2026-01-07 01:17:47.959067 | orchestrator |
2026-01-07 01:17:47.959116 | orchestrator |
2026-01-07 01:17:47.960018 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-07 01:17:47.960062 | orchestrator |
2026-01-07 01:17:47.960082 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-07 01:17:47.960102 | orchestrator | Wednesday 07 January 2026 01:13:00 +0000 (0:00:00.279) 0:00:00.279 *****
2026-01-07 01:17:47.960151 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:47.960173 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:47.960192 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:47.960211 | orchestrator |
2026-01-07 01:17:47.960232 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-07 01:17:47.960251 | orchestrator | Wednesday 07 January 2026 01:13:00 +0000 (0:00:00.286) 0:00:00.565 *****
2026-01-07 01:17:47.960269 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-07 01:17:47.960303 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-07 01:17:47.960325 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-07 01:17:47.960345 | orchestrator |
2026-01-07 01:17:47.960393 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-07 01:17:47.960413 | orchestrator |
2026-01-07 01:17:47.960433 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:47.960454 | orchestrator | Wednesday 07 January 2026 01:13:01 +0000 (0:00:00.422) 0:00:00.988 *****
2026-01-07 01:17:47.960475 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:47.960496 | orchestrator |
2026-01-07 01:17:47.960515 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-07 01:17:47.960535 | orchestrator | Wednesday 07 January 2026 01:13:01 +0000 (0:00:00.553) 0:00:01.541 *****
2026-01-07 01:17:47.960556 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-07 01:17:47.960577 | orchestrator |
2026-01-07 01:17:47.960597 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-07 01:17:47.960619 | orchestrator | Wednesday 07 January 2026 01:13:04 +0000 (0:00:03.194) 0:00:04.736 *****
2026-01-07 01:17:47.960641 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-07 01:17:47.960663 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-07 01:17:47.960685 | orchestrator |
2026-01-07 01:17:47.960705 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-07 01:17:47.960725 | orchestrator | Wednesday 07 January 2026 01:13:10 +0000 (0:00:06.007) 0:00:10.743 *****
2026-01-07 01:17:47.960745 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-07 01:17:47.960765 | orchestrator |
2026-01-07 01:17:47.960814 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-07 01:17:47.960833 | orchestrator | Wednesday 07 January 2026 01:13:13 +0000 (0:00:02.762) 0:00:13.505 *****
2026-01-07 01:17:47.960852 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-07 01:17:47.960872 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:17:47.960891 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-07 01:17:47.960910 | orchestrator |
2026-01-07 01:17:47.960928 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-07 01:17:47.960946 | orchestrator | Wednesday 07 January 2026 01:13:21 +0000 (0:00:07.364) 0:00:20.870 *****
2026-01-07 01:17:47.960965 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-07 01:17:47.960984 | orchestrator |
2026-01-07 01:17:47.961002 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-07 01:17:47.961020 | orchestrator | Wednesday 07 January 2026 01:13:24 +0000 (0:00:03.257) 0:00:24.127 *****
2026-01-07 01:17:47.961032 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:17:47.961052 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-07 01:17:47.961070 | orchestrator |
2026-01-07 01:17:47.961088 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-07 01:17:47.961107 | orchestrator | Wednesday 07 January 2026 01:13:31 +0000 (0:00:07.138) 0:00:31.265 *****
2026-01-07 01:17:47.961123 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-07 01:17:47.961157 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-07 01:17:47.961177 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-07 01:17:47.961196 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-07 01:17:47.961215 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-07 01:17:47.961233 | orchestrator |
2026-01-07 01:17:47.961251 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:47.961269 | orchestrator | Wednesday 07 January 2026 01:13:47 +0000 (0:00:16.376) 0:00:47.641 *****
2026-01-07 01:17:47.961287 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:47.961305 | orchestrator |
2026-01-07 01:17:47.961323 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-07 01:17:47.961341 | orchestrator | Wednesday 07 January 2026 01:13:48 +0000 (0:00:00.789) 0:00:48.431 *****
2026-01-07 01:17:47.961359 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.961377 | orchestrator |
2026-01-07 01:17:47.961395 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-07 01:17:47.961413 | orchestrator | Wednesday 07 January 2026 01:13:55 +0000 (0:00:06.513) 0:00:54.944 *****
2026-01-07 01:17:47.961431 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.961449 | orchestrator |
2026-01-07 01:17:47.961469 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-07 01:17:47.961558 | orchestrator | Wednesday 07 January 2026 01:14:00 +0000 (0:00:05.523) 0:01:00.468 *****
2026-01-07 01:17:47.961578 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:47.961597 | orchestrator |
2026-01-07 01:17:47.961615 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-07 01:17:47.961632 | orchestrator | Wednesday 07 January 2026 01:14:04 +0000 (0:00:03.638) 0:01:04.107 *****
2026-01-07 01:17:47.961650 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-07 01:17:47.961668 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-07 01:17:47.961686 | orchestrator |
2026-01-07 01:17:47.961705 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-07 01:17:47.961724 | orchestrator | Wednesday 07 January 2026 01:14:14 +0000 (0:00:10.230) 0:01:14.337 *****
2026-01-07 01:17:47.961753 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-07 01:17:47.961945 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-07 01:17:47.961980 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-07 01:17:47.961992 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-07 01:17:47.962003 | orchestrator |
2026-01-07 01:17:47.962064 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-07 01:17:47.962080 | orchestrator | Wednesday 07 January 2026 01:14:30 +0000 (0:00:16.002) 0:01:30.339 *****
2026-01-07 01:17:47.962090 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962101 | orchestrator |
2026-01-07 01:17:47.962112 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-07 01:17:47.962123 | orchestrator | Wednesday 07 January 2026 01:14:36 +0000 (0:00:05.570) 0:01:35.910 *****
2026-01-07 01:17:47.962134 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962145 | orchestrator |
2026-01-07 01:17:47.962155 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-07 01:17:47.962166 | orchestrator | Wednesday 07 January 2026 01:14:41 +0000 (0:00:05.072) 0:01:40.982 *****
2026-01-07 01:17:47.962177 | orchestrator | skipping: [testbed-node-0]
2026-01-07 01:17:47.962201 | orchestrator |
2026-01-07 01:17:47.962213 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-07 01:17:47.962223 | orchestrator | Wednesday 07 January 2026 01:14:41 +0000 (0:00:00.222) 0:01:41.204 *****
2026-01-07 01:17:47.962234 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:47.962245 | orchestrator |
2026-01-07 01:17:47.962255 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-07 01:17:47.962266 | orchestrator | Wednesday 07 January 2026 01:14:45 +0000 (0:00:03.792) 0:01:44.997 *****
2026-01-07 01:17:47.962277 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-07 01:17:47.962288 | orchestrator |
2026-01-07 01:17:47.962298 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-07 01:17:47.962309 | orchestrator | Wednesday 07 January 2026 01:14:46 +0000 (0:00:01.149) 0:01:46.146 *****
2026-01-07 01:17:47.962319 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962329 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962338 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962360 | orchestrator |
2026-01-07 01:17:47.962379 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-07 01:17:47.962389 | orchestrator | Wednesday 07 January 2026 01:14:50 +0000 (0:00:04.484) 0:01:50.631 *****
2026-01-07 01:17:47.962398 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962408 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962418 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962427 | orchestrator |
2026-01-07 01:17:47.962437 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-07 01:17:47.962446 | orchestrator | Wednesday 07 January 2026 01:14:54 +0000 (0:00:04.037) 0:01:54.668 *****
2026-01-07 01:17:47.962456 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962465 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962475 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962484 | orchestrator |
2026-01-07 01:17:47.962494 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-07 01:17:47.962504 | orchestrator | Wednesday 07 January 2026 01:14:55 +0000 (0:00:00.804) 0:01:55.473 *****
2026-01-07 01:17:47.962513 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:47.962545 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:47.962555 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:47.962564 | orchestrator |
2026-01-07 01:17:47.962574 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-07 01:17:47.962584 | orchestrator | Wednesday 07 January 2026 01:14:57 +0000 (0:00:01.962) 0:01:57.435 *****
2026-01-07 01:17:47.962593 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962603 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962612 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962621 | orchestrator |
2026-01-07 01:17:47.962631 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-07 01:17:47.962641 | orchestrator | Wednesday 07 January 2026 01:14:58 +0000 (0:00:01.313) 0:01:58.748 *****
2026-01-07 01:17:47.962650 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962660 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962669 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962679 | orchestrator |
2026-01-07 01:17:47.962688 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-07 01:17:47.962698 | orchestrator | Wednesday 07 January 2026 01:15:00 +0000 (0:00:01.100) 0:01:59.849 *****
2026-01-07 01:17:47.962707 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962717 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962726 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962736 | orchestrator |
2026-01-07 01:17:47.962816 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-07 01:17:47.962828 | orchestrator | Wednesday 07 January 2026 01:15:01 +0000 (0:00:01.737) 0:02:01.587 *****
2026-01-07 01:17:47.962845 | orchestrator | changed: [testbed-node-0]
2026-01-07 01:17:47.962855 | orchestrator | changed: [testbed-node-1]
2026-01-07 01:17:47.962865 | orchestrator | changed: [testbed-node-2]
2026-01-07 01:17:47.962874 | orchestrator |
2026-01-07 01:17:47.962884 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-07 01:17:47.962893 | orchestrator | Wednesday 07 January 2026 01:15:03 +0000 (0:00:01.865) 0:02:03.453 *****
2026-01-07 01:17:47.962903 | orchestrator | ok: [testbed-node-0]
2026-01-07 01:17:47.962913 | orchestrator | ok: [testbed-node-1]
2026-01-07 01:17:47.962922 | orchestrator | ok: [testbed-node-2]
2026-01-07 01:17:47.962932 | orchestrator |
2026-01-07 01:17:47.962941 | orchestrator
| TASK [octavia : Gather facts] ************************************************** 2026-01-07 01:17:47.962957 | orchestrator | Wednesday 07 January 2026 01:15:04 +0000 (0:00:00.658) 0:02:04.112 ***** 2026-01-07 01:17:47.962967 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:17:47.962977 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:17:47.962986 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:47.962995 | orchestrator | 2026-01-07 01:17:47.963005 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:17:47.963014 | orchestrator | Wednesday 07 January 2026 01:15:07 +0000 (0:00:03.362) 0:02:07.474 ***** 2026-01-07 01:17:47.963024 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:47.963033 | orchestrator | 2026-01-07 01:17:47.963043 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-07 01:17:47.963052 | orchestrator | Wednesday 07 January 2026 01:15:08 +0000 (0:00:00.837) 0:02:08.311 ***** 2026-01-07 01:17:47.963062 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:47.963071 | orchestrator | 2026-01-07 01:17:47.963080 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-07 01:17:47.963090 | orchestrator | Wednesday 07 January 2026 01:15:12 +0000 (0:00:03.940) 0:02:12.252 ***** 2026-01-07 01:17:47.963099 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:47.963109 | orchestrator | 2026-01-07 01:17:47.963118 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-07 01:17:47.963127 | orchestrator | Wednesday 07 January 2026 01:15:15 +0000 (0:00:03.100) 0:02:15.352 ***** 2026-01-07 01:17:47.963137 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-07 01:17:47.963146 | orchestrator | ok: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-01-07 01:17:47.963156 | orchestrator | 2026-01-07 01:17:47.963166 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-01-07 01:17:47.963175 | orchestrator | Wednesday 07 January 2026 01:15:21 +0000 (0:00:05.771) 0:02:21.124 ***** 2026-01-07 01:17:47.963185 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:47.963194 | orchestrator | 2026-01-07 01:17:47.963204 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-01-07 01:17:47.963213 | orchestrator | Wednesday 07 January 2026 01:15:24 +0000 (0:00:03.194) 0:02:24.318 ***** 2026-01-07 01:17:47.963223 | orchestrator | ok: [testbed-node-0] 2026-01-07 01:17:47.963232 | orchestrator | ok: [testbed-node-1] 2026-01-07 01:17:47.963241 | orchestrator | ok: [testbed-node-2] 2026-01-07 01:17:47.963251 | orchestrator | 2026-01-07 01:17:47.963260 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-01-07 01:17:47.963269 | orchestrator | Wednesday 07 January 2026 01:15:24 +0000 (0:00:00.320) 0:02:24.639 ***** 2026-01-07 01:17:47.963282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.963326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.963343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.963354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.963364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.963374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.963385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.963547 | orchestrator | 2026-01-07 01:17:47.963558 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-07 01:17:47.963567 | orchestrator | Wednesday 07 January 2026 01:15:27 +0000 (0:00:02.372) 0:02:27.011 ***** 2026-01-07 01:17:47.963577 | 
orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.963587 | orchestrator | 2026-01-07 01:17:47.963597 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-07 01:17:47.963607 | orchestrator | Wednesday 07 January 2026 01:15:27 +0000 (0:00:00.141) 0:02:27.153 ***** 2026-01-07 01:17:47.963616 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.963626 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:47.963635 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:47.963645 | orchestrator | 2026-01-07 01:17:47.963658 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-07 01:17:47.963668 | orchestrator | Wednesday 07 January 2026 01:15:27 +0000 (0:00:00.534) 0:02:27.688 ***** 2026-01-07 01:17:47.963679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.963689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.963705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.963716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.963726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.963736 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.963847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.963863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.963874 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.963891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.963901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.963911 | orchestrator | skipping: 
[testbed-node-1] 2026-01-07 01:17:47.963949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.963965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.963976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.963986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964011 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:47.964021 | orchestrator | 2026-01-07 01:17:47.964031 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:17:47.964041 | orchestrator | Wednesday 07 January 2026 01:15:28 +0000 (0:00:00.702) 0:02:28.390 ***** 2026-01-07 01:17:47.964051 | orchestrator | included: 
/ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-07 01:17:47.964060 | orchestrator | 2026-01-07 01:17:47.964070 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-07 01:17:47.964079 | orchestrator | Wednesday 07 January 2026 01:15:29 +0000 (0:00:00.561) 0:02:28.951 ***** 2026-01-07 01:17:47.964090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964152 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964274 | orchestrator | 2026-01-07 01:17:47.964282 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-07 01:17:47.964290 | orchestrator | Wednesday 07 January 2026 01:15:34 +0000 (0:00:05.537) 0:02:34.489 ***** 2026-01-07 01:17:47.964303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964349 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.964363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964409 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:47.964422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964447 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964472 | orchestrator | skipping: 
[testbed-node-2] 2026-01-07 01:17:47.964480 | orchestrator | 2026-01-07 01:17:47.964488 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-07 01:17:47.964496 | orchestrator | Wednesday 07 January 2026 01:15:35 +0000 (0:00:01.185) 0:02:35.675 ***** 2026-01-07 01:17:47.964504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964560 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.964568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964623 | orchestrator | skipping: [testbed-node-2] 2026-01-07 01:17:47.964631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-07 01:17:47.964639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-07 01:17:47.964648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-07 01:17:47.964678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-07 01:17:47.964686 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:47.964694 | orchestrator | 2026-01-07 01:17:47.964702 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-07 01:17:47.964710 | orchestrator | Wednesday 07 January 2026 01:15:36 +0000 (0:00:00.925) 0:02:36.601 ***** 2026-01-07 01:17:47.964718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:17:47.964762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.964805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.964892 | orchestrator | 2026-01-07 01:17:47.964901 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-07 01:17:47.964909 | orchestrator | Wednesday 07 January 2026 01:15:41 +0000 (0:00:04.903) 0:02:41.504 ***** 2026-01-07 01:17:47.964917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:47.964929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:47.964937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-07 01:17:47.964945 | orchestrator | 2026-01-07 01:17:47.964953 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-07 01:17:47.964961 | orchestrator | Wednesday 07 January 2026 01:15:43 +0000 (0:00:02.058) 0:02:43.563 ***** 2026-01-07 01:17:47.964974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.964997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.965006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965131 | orchestrator | 2026-01-07 01:17:47.965139 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-07 01:17:47.965147 | orchestrator | Wednesday 07 January 2026 01:16:02 +0000 (0:00:19.192) 0:03:02.755 ***** 2026-01-07 01:17:47.965155 | 
orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965163 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.965171 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.965179 | orchestrator | 2026-01-07 01:17:47.965187 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-07 01:17:47.965194 | orchestrator | Wednesday 07 January 2026 01:16:04 +0000 (0:00:01.355) 0:03:04.111 ***** 2026-01-07 01:17:47.965202 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965210 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965218 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965226 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965234 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965242 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965250 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965261 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965269 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965277 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965285 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965293 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965301 | orchestrator | 2026-01-07 01:17:47.965309 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-07 01:17:47.965317 | orchestrator | Wednesday 07 January 2026 01:16:09 +0000 (0:00:04.763) 0:03:08.874 ***** 2026-01-07 01:17:47.965324 | orchestrator | 
changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965332 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965340 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965348 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965356 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965364 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965372 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965379 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965387 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965395 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965403 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965411 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965418 | orchestrator | 2026-01-07 01:17:47.965426 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-07 01:17:47.965434 | orchestrator | Wednesday 07 January 2026 01:16:14 +0000 (0:00:05.069) 0:03:13.944 ***** 2026-01-07 01:17:47.965442 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965450 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965458 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-07 01:17:47.965465 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965473 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965481 | orchestrator | 
changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-07 01:17:47.965494 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965502 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965510 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-07 01:17:47.965518 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965526 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965534 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-07 01:17:47.965542 | orchestrator | 2026-01-07 01:17:47.965550 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-07 01:17:47.965558 | orchestrator | Wednesday 07 January 2026 01:16:19 +0000 (0:00:04.989) 0:03:18.934 ***** 2026-01-07 01:17:47.965569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.965582 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.965591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-07 01:17:47.965599 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-07 01:17:47.965632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-07 01:17:47.965723 | orchestrator | 2026-01-07 01:17:47.965731 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-07 01:17:47.965739 | orchestrator | Wednesday 07 January 2026 01:16:24 +0000 (0:00:05.336) 0:03:24.270 ***** 2026-01-07 01:17:47.965747 | orchestrator | skipping: [testbed-node-0] 2026-01-07 01:17:47.965755 | orchestrator | skipping: [testbed-node-1] 2026-01-07 01:17:47.965763 | orchestrator | skipping: [testbed-node-2] 
2026-01-07 01:17:47.965784 | orchestrator | 2026-01-07 01:17:47.965793 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-07 01:17:47.965801 | orchestrator | Wednesday 07 January 2026 01:16:25 +0000 (0:00:00.681) 0:03:24.951 ***** 2026-01-07 01:17:47.965808 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965816 | orchestrator | 2026-01-07 01:17:47.965824 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-07 01:17:47.965832 | orchestrator | Wednesday 07 January 2026 01:16:27 +0000 (0:00:02.141) 0:03:27.093 ***** 2026-01-07 01:17:47.965840 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965848 | orchestrator | 2026-01-07 01:17:47.965856 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-07 01:17:47.965864 | orchestrator | Wednesday 07 January 2026 01:16:29 +0000 (0:00:02.181) 0:03:29.274 ***** 2026-01-07 01:17:47.965872 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965879 | orchestrator | 2026-01-07 01:17:47.965887 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-07 01:17:47.965895 | orchestrator | Wednesday 07 January 2026 01:16:31 +0000 (0:00:02.400) 0:03:31.675 ***** 2026-01-07 01:17:47.965903 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965911 | orchestrator | 2026-01-07 01:17:47.965919 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-07 01:17:47.965927 | orchestrator | Wednesday 07 January 2026 01:16:34 +0000 (0:00:02.283) 0:03:33.958 ***** 2026-01-07 01:17:47.965935 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.965943 | orchestrator | 2026-01-07 01:17:47.965950 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:47.965958 | orchestrator | 
Wednesday 07 January 2026 01:16:54 +0000 (0:00:20.771) 0:03:54.729 ***** 2026-01-07 01:17:47.965966 | orchestrator | 2026-01-07 01:17:47.965974 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:47.965986 | orchestrator | Wednesday 07 January 2026 01:16:54 +0000 (0:00:00.064) 0:03:54.794 ***** 2026-01-07 01:17:47.965994 | orchestrator | 2026-01-07 01:17:47.966002 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-07 01:17:47.966036 | orchestrator | Wednesday 07 January 2026 01:16:55 +0000 (0:00:00.080) 0:03:54.875 ***** 2026-01-07 01:17:47.966046 | orchestrator | 2026-01-07 01:17:47.966054 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-07 01:17:47.966062 | orchestrator | Wednesday 07 January 2026 01:16:55 +0000 (0:00:00.074) 0:03:54.949 ***** 2026-01-07 01:17:47.966070 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.966078 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.966086 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.966093 | orchestrator | 2026-01-07 01:17:47.966101 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-07 01:17:47.966109 | orchestrator | Wednesday 07 January 2026 01:17:09 +0000 (0:00:14.704) 0:04:09.653 ***** 2026-01-07 01:17:47.966117 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.966125 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.966132 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.966140 | orchestrator | 2026-01-07 01:17:47.966151 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-07 01:17:47.966160 | orchestrator | Wednesday 07 January 2026 01:17:16 +0000 (0:00:06.414) 0:04:16.067 ***** 2026-01-07 01:17:47.966167 | orchestrator | changed: [testbed-node-0] 
2026-01-07 01:17:47.966175 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.966183 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.966190 | orchestrator | 2026-01-07 01:17:47.966198 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-07 01:17:47.966206 | orchestrator | Wednesday 07 January 2026 01:17:26 +0000 (0:00:10.407) 0:04:26.475 ***** 2026-01-07 01:17:47.966213 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.966221 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.966229 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.966236 | orchestrator | 2026-01-07 01:17:47.966244 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-07 01:17:47.966252 | orchestrator | Wednesday 07 January 2026 01:17:36 +0000 (0:00:09.925) 0:04:36.400 ***** 2026-01-07 01:17:47.966260 | orchestrator | changed: [testbed-node-1] 2026-01-07 01:17:47.966268 | orchestrator | changed: [testbed-node-2] 2026-01-07 01:17:47.966275 | orchestrator | changed: [testbed-node-0] 2026-01-07 01:17:47.966283 | orchestrator | 2026-01-07 01:17:47.966291 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:17:47.966299 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-07 01:17:47.966307 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:17:47.966315 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-07 01:17:47.966323 | orchestrator | 2026-01-07 01:17:47.966331 | orchestrator | 2026-01-07 01:17:47.966339 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:17:47.966347 | orchestrator | Wednesday 07 January 2026 01:17:45 
+0000 (0:00:08.740) 0:04:45.141 ***** 2026-01-07 01:17:47.966354 | orchestrator | =============================================================================== 2026-01-07 01:17:47.966362 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.77s 2026-01-07 01:17:47.966370 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 19.19s 2026-01-07 01:17:47.966378 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.38s 2026-01-07 01:17:47.966390 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.00s 2026-01-07 01:17:47.966398 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.70s 2026-01-07 01:17:47.966406 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.41s 2026-01-07 01:17:47.966413 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.23s 2026-01-07 01:17:47.966421 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.93s 2026-01-07 01:17:47.966429 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.74s 2026-01-07 01:17:47.966436 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.36s 2026-01-07 01:17:47.966444 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.14s 2026-01-07 01:17:47.966452 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.51s 2026-01-07 01:17:47.966460 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.41s 2026-01-07 01:17:47.966467 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.01s 2026-01-07 01:17:47.966475 | orchestrator | octavia : Get security groups for octavia 
------------------------------- 5.77s 2026-01-07 01:17:47.966483 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.57s 2026-01-07 01:17:47.966490 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.54s 2026-01-07 01:17:47.966498 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.52s 2026-01-07 01:17:47.966506 | orchestrator | octavia : Check octavia containers -------------------------------------- 5.34s 2026-01-07 01:17:47.966514 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.07s 2026-01-07 01:17:51.006172 | orchestrator | 2026-01-07 01:17:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:17:54.057336 | orchestrator | 2026-01-07 01:17:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:17:57.093117 | orchestrator | 2026-01-07 01:17:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:00.136266 | orchestrator | 2026-01-07 01:18:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:03.184231 | orchestrator | 2026-01-07 01:18:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:06.227279 | orchestrator | 2026-01-07 01:18:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:09.268690 | orchestrator | 2026-01-07 01:18:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:12.306579 | orchestrator | 2026-01-07 01:18:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:15.357405 | orchestrator | 2026-01-07 01:18:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:18.400142 | orchestrator | 2026-01-07 01:18:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:21.448147 | orchestrator | 2026-01-07 01:18:21 | INFO  | Wait 1 second(s) until refresh of running tasks 
2026-01-07 01:18:24.493003 | orchestrator | 2026-01-07 01:18:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:27.537125 | orchestrator | 2026-01-07 01:18:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:30.570817 | orchestrator | 2026-01-07 01:18:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:33.617260 | orchestrator | 2026-01-07 01:18:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:36.660350 | orchestrator | 2026-01-07 01:18:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:39.705305 | orchestrator | 2026-01-07 01:18:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:42.745018 | orchestrator | 2026-01-07 01:18:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:45.782166 | orchestrator | 2026-01-07 01:18:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-07 01:18:48.825773 | orchestrator | 2026-01-07 01:18:49.169425 | orchestrator | 2026-01-07 01:18:49.176446 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Jan 7 01:18:49 UTC 2026 2026-01-07 01:18:49.176521 | orchestrator | 2026-01-07 01:18:49.593823 | orchestrator | ok: Runtime: 0:36:28.145033 2026-01-07 01:18:49.872537 | 2026-01-07 01:18:49.872683 | TASK [Bootstrap services] 2026-01-07 01:18:50.672398 | orchestrator | 2026-01-07 01:18:50.672503 | orchestrator | # BOOTSTRAP 2026-01-07 01:18:50.672514 | orchestrator | 2026-01-07 01:18:50.672519 | orchestrator | + set -e 2026-01-07 01:18:50.672525 | orchestrator | + echo 2026-01-07 01:18:50.672531 | orchestrator | + echo '# BOOTSTRAP' 2026-01-07 01:18:50.672538 | orchestrator | + echo 2026-01-07 01:18:50.672557 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-07 01:18:50.681170 | orchestrator | + set -e 2026-01-07 01:18:50.681242 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 
2026-01-07 01:18:55.199297 | orchestrator | 2026-01-07 01:18:55 | INFO  | It takes a moment until task 7c692aee-db06-4ffc-91b1-436ead678759 (flavor-manager) has been started and output is visible here. 2026-01-07 01:19:03.316804 | orchestrator | 2026-01-07 01:18:58 | INFO  | Flavor SCS-1L-1 created 2026-01-07 01:19:03.316946 | orchestrator | 2026-01-07 01:18:58 | INFO  | Flavor SCS-1L-1-5 created 2026-01-07 01:19:03.316958 | orchestrator | 2026-01-07 01:18:58 | INFO  | Flavor SCS-1V-2 created 2026-01-07 01:19:03.316962 | orchestrator | 2026-01-07 01:18:59 | INFO  | Flavor SCS-1V-2-5 created 2026-01-07 01:19:03.316966 | orchestrator | 2026-01-07 01:18:59 | INFO  | Flavor SCS-1V-4 created 2026-01-07 01:19:03.316971 | orchestrator | 2026-01-07 01:18:59 | INFO  | Flavor SCS-1V-4-10 created 2026-01-07 01:19:03.316975 | orchestrator | 2026-01-07 01:18:59 | INFO  | Flavor SCS-1V-8 created 2026-01-07 01:19:03.316980 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-1V-8-20 created 2026-01-07 01:19:03.316994 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-4 created 2026-01-07 01:19:03.316998 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-4-10 created 2026-01-07 01:19:03.317002 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-8 created 2026-01-07 01:19:03.317006 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-8-20 created 2026-01-07 01:19:03.317010 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-16 created 2026-01-07 01:19:03.317014 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-2V-16-50 created 2026-01-07 01:19:03.317017 | orchestrator | 2026-01-07 01:19:00 | INFO  | Flavor SCS-4V-8 created 2026-01-07 01:19:03.317021 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-4V-8-20 created 2026-01-07 01:19:03.317025 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-4V-16 created 2026-01-07 01:19:03.317029 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-4V-16-50 created 
2026-01-07 01:19:03.317033 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-4V-32 created 2026-01-07 01:19:03.317037 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-4V-32-100 created 2026-01-07 01:19:03.317041 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-8V-16 created 2026-01-07 01:19:03.317044 | orchestrator | 2026-01-07 01:19:01 | INFO  | Flavor SCS-8V-16-50 created 2026-01-07 01:19:03.317049 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-8V-32 created 2026-01-07 01:19:03.317052 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-8V-32-100 created 2026-01-07 01:19:03.317056 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-16V-32 created 2026-01-07 01:19:03.317060 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-16V-32-100 created 2026-01-07 01:19:03.317064 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-2V-4-20s created 2026-01-07 01:19:03.317067 | orchestrator | 2026-01-07 01:19:02 | INFO  | Flavor SCS-4V-8-50s created 2026-01-07 01:19:03.317071 | orchestrator | 2026-01-07 01:19:03 | INFO  | Flavor SCS-8V-32-100s created 2026-01-07 01:19:05.637362 | orchestrator | 2026-01-07 01:19:05 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-07 01:19:15.748975 | orchestrator | 2026-01-07 01:19:15 | INFO  | Task 9dd1e8e6-0114-4800-bbf7-7b93cd823fe1 (bootstrap-basic) was prepared for execution. 2026-01-07 01:19:15.749030 | orchestrator | 2026-01-07 01:19:15 | INFO  | It takes a moment until task 9dd1e8e6-0114-4800-bbf7-7b93cd823fe1 (bootstrap-basic) has been started and output is visible here. 
2026-01-07 01:20:02.239245 | orchestrator | 2026-01-07 01:20:02.239335 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-07 01:20:02.239349 | orchestrator | 2026-01-07 01:20:02.239357 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-07 01:20:02.239364 | orchestrator | Wednesday 07 January 2026 01:19:20 +0000 (0:00:00.069) 0:00:00.069 ***** 2026-01-07 01:20:02.239371 | orchestrator | ok: [localhost] 2026-01-07 01:20:02.239380 | orchestrator | 2026-01-07 01:20:02.239386 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-07 01:20:02.239392 | orchestrator | Wednesday 07 January 2026 01:19:22 +0000 (0:00:01.960) 0:00:02.030 ***** 2026-01-07 01:20:02.239399 | orchestrator | ok: [localhost] 2026-01-07 01:20:02.239405 | orchestrator | 2026-01-07 01:20:02.239411 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-07 01:20:02.239417 | orchestrator | Wednesday 07 January 2026 01:19:31 +0000 (0:00:09.241) 0:00:11.271 ***** 2026-01-07 01:20:02.239424 | orchestrator | changed: [localhost] 2026-01-07 01:20:02.239431 | orchestrator | 2026-01-07 01:20:02.239437 | orchestrator | TASK [Create public network] *************************************************** 2026-01-07 01:20:02.239444 | orchestrator | Wednesday 07 January 2026 01:19:39 +0000 (0:00:07.917) 0:00:19.189 ***** 2026-01-07 01:20:02.239450 | orchestrator | changed: [localhost] 2026-01-07 01:20:02.239456 | orchestrator | 2026-01-07 01:20:02.239461 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-07 01:20:02.239467 | orchestrator | Wednesday 07 January 2026 01:19:44 +0000 (0:00:04.748) 0:00:23.937 ***** 2026-01-07 01:20:02.239478 | orchestrator | changed: [localhost] 2026-01-07 01:20:02.239486 | orchestrator | 2026-01-07 01:20:02.239492 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-07 01:20:02.239500 | orchestrator | Wednesday 07 January 2026 01:19:50 +0000 (0:00:06.199) 0:00:30.137 ***** 2026-01-07 01:20:02.239504 | orchestrator | changed: [localhost] 2026-01-07 01:20:02.239508 | orchestrator | 2026-01-07 01:20:02.239512 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-07 01:20:02.239517 | orchestrator | Wednesday 07 January 2026 01:19:54 +0000 (0:00:04.311) 0:00:34.448 ***** 2026-01-07 01:20:02.239520 | orchestrator | changed: [localhost] 2026-01-07 01:20:02.239524 | orchestrator | 2026-01-07 01:20:02.239528 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-07 01:20:02.239540 | orchestrator | Wednesday 07 January 2026 01:19:58 +0000 (0:00:03.807) 0:00:38.255 ***** 2026-01-07 01:20:02.239544 | orchestrator | ok: [localhost] 2026-01-07 01:20:02.239548 | orchestrator | 2026-01-07 01:20:02.239552 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-07 01:20:02.239557 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-07 01:20:02.239562 | orchestrator | 2026-01-07 01:20:02.239566 | orchestrator | 2026-01-07 01:20:02.239570 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-07 01:20:02.239573 | orchestrator | Wednesday 07 January 2026 01:20:01 +0000 (0:00:03.595) 0:00:41.851 ***** 2026-01-07 01:20:02.239577 | orchestrator | =============================================================================== 2026-01-07 01:20:02.239581 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.24s 2026-01-07 01:20:02.239585 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.92s 2026-01-07 01:20:02.239589 | 
orchestrator | Set public network to default ------------------------------------------- 6.20s 2026-01-07 01:20:02.239592 | orchestrator | Create public network --------------------------------------------------- 4.75s 2026-01-07 01:20:02.239611 | orchestrator | Create public subnet ---------------------------------------------------- 4.31s 2026-01-07 01:20:02.239615 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.81s 2026-01-07 01:20:02.239619 | orchestrator | Create manager role ----------------------------------------------------- 3.60s 2026-01-07 01:20:02.239623 | orchestrator | Gathering Facts --------------------------------------------------------- 1.96s 2026-01-07 01:20:04.707581 | orchestrator | 2026-01-07 01:20:04 | INFO  | It takes a moment until task c1891b1a-e71d-4318-a2b3-c7baf37d56f3 (image-manager) has been started and output is visible here. 2026-01-07 01:20:46.942725 | orchestrator | 2026-01-07 01:20:07 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-07 01:20:46.942819 | orchestrator | 2026-01-07 01:20:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-07 01:20:46.942830 | orchestrator | 2026-01-07 01:20:07 | INFO  | Importing image Cirros 0.6.2 2026-01-07 01:20:46.942838 | orchestrator | 2026-01-07 01:20:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-07 01:20:46.942846 | orchestrator | 2026-01-07 01:20:10 | INFO  | Waiting for image to leave queued state... 2026-01-07 01:20:46.942853 | orchestrator | 2026-01-07 01:20:12 | INFO  | Waiting for import to complete... 
2026-01-07 01:20:46.942859 | orchestrator | 2026-01-07 01:20:22 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-07 01:20:46.942882 | orchestrator | 2026-01-07 01:20:22 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-07 01:20:46.942889 | orchestrator | 2026-01-07 01:20:22 | INFO  | Setting internal_version = 0.6.2 2026-01-07 01:20:46.942897 | orchestrator | 2026-01-07 01:20:22 | INFO  | Setting image_original_user = cirros 2026-01-07 01:20:46.942904 | orchestrator | 2026-01-07 01:20:22 | INFO  | Adding tag os:cirros 2026-01-07 01:20:46.942911 | orchestrator | 2026-01-07 01:20:22 | INFO  | Setting property architecture: x86_64 2026-01-07 01:20:46.942917 | orchestrator | 2026-01-07 01:20:23 | INFO  | Setting property hw_disk_bus: scsi 2026-01-07 01:20:46.942926 | orchestrator | 2026-01-07 01:20:23 | INFO  | Setting property hw_rng_model: virtio 2026-01-07 01:20:46.942936 | orchestrator | 2026-01-07 01:20:23 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-07 01:20:46.942946 | orchestrator | 2026-01-07 01:20:23 | INFO  | Setting property hw_watchdog_action: reset 2026-01-07 01:20:46.942961 | orchestrator | 2026-01-07 01:20:24 | INFO  | Setting property hypervisor_type: qemu 2026-01-07 01:20:46.943055 | orchestrator | 2026-01-07 01:20:24 | INFO  | Setting property os_distro: cirros 2026-01-07 01:20:46.943067 | orchestrator | 2026-01-07 01:20:24 | INFO  | Setting property os_purpose: minimal 2026-01-07 01:20:46.943077 | orchestrator | 2026-01-07 01:20:24 | INFO  | Setting property replace_frequency: never 2026-01-07 01:20:46.943087 | orchestrator | 2026-01-07 01:20:25 | INFO  | Setting property uuid_validity: none 2026-01-07 01:20:46.943096 | orchestrator | 2026-01-07 01:20:25 | INFO  | Setting property provided_until: none 2026-01-07 01:20:46.943107 | orchestrator | 2026-01-07 01:20:25 | INFO  | Setting property image_description: Cirros 2026-01-07 01:20:46.943117 | orchestrator | 2026-01-07 01:20:25 | INFO  | 
Setting property image_name: Cirros 2026-01-07 01:20:46.943128 | orchestrator | 2026-01-07 01:20:25 | INFO  | Setting property internal_version: 0.6.2 2026-01-07 01:20:46.943139 | orchestrator | 2026-01-07 01:20:26 | INFO  | Setting property image_original_user: cirros 2026-01-07 01:20:46.943175 | orchestrator | 2026-01-07 01:20:26 | INFO  | Setting property os_version: 0.6.2 2026-01-07 01:20:46.943192 | orchestrator | 2026-01-07 01:20:26 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-07 01:20:46.943202 | orchestrator | 2026-01-07 01:20:26 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-07 01:20:46.943210 | orchestrator | 2026-01-07 01:20:27 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-07 01:20:46.943217 | orchestrator | 2026-01-07 01:20:27 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-07 01:20:46.943224 | orchestrator | 2026-01-07 01:20:27 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-07 01:20:46.943232 | orchestrator | 2026-01-07 01:20:27 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-07 01:20:46.943243 | orchestrator | 2026-01-07 01:20:27 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-07 01:20:46.943250 | orchestrator | 2026-01-07 01:20:27 | INFO  | Importing image Cirros 0.6.3 2026-01-07 01:20:46.943258 | orchestrator | 2026-01-07 01:20:27 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-07 01:20:46.943265 | orchestrator | 2026-01-07 01:20:29 | INFO  | Waiting for image to leave queued state... 2026-01-07 01:20:46.943272 | orchestrator | 2026-01-07 01:20:31 | INFO  | Waiting for import to complete... 
2026-01-07 01:20:46.943294 | orchestrator | 2026-01-07 01:20:41 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-07 01:20:46.943302 | orchestrator | 2026-01-07 01:20:42 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-07 01:20:46.943309 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting internal_version = 0.6.3 2026-01-07 01:20:46.943316 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting image_original_user = cirros 2026-01-07 01:20:46.943323 | orchestrator | 2026-01-07 01:20:42 | INFO  | Adding tag os:cirros 2026-01-07 01:20:46.943331 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting property architecture: x86_64 2026-01-07 01:20:46.943338 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting property hw_disk_bus: scsi 2026-01-07 01:20:46.943345 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting property hw_rng_model: virtio 2026-01-07 01:20:46.943352 | orchestrator | 2026-01-07 01:20:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-07 01:20:46.943359 | orchestrator | 2026-01-07 01:20:43 | INFO  | Setting property hw_watchdog_action: reset 2026-01-07 01:20:46.943367 | orchestrator | 2026-01-07 01:20:43 | INFO  | Setting property hypervisor_type: qemu 2026-01-07 01:20:46.943374 | orchestrator | 2026-01-07 01:20:43 | INFO  | Setting property os_distro: cirros 2026-01-07 01:20:46.943381 | orchestrator | 2026-01-07 01:20:43 | INFO  | Setting property os_purpose: minimal 2026-01-07 01:20:46.943388 | orchestrator | 2026-01-07 01:20:43 | INFO  | Setting property replace_frequency: never 2026-01-07 01:20:46.943396 | orchestrator | 2026-01-07 01:20:44 | INFO  | Setting property uuid_validity: none 2026-01-07 01:20:46.943403 | orchestrator | 2026-01-07 01:20:44 | INFO  | Setting property provided_until: none 2026-01-07 01:20:46.943420 | orchestrator | 2026-01-07 01:20:44 | INFO  | Setting property image_description: Cirros 2026-01-07 01:20:46.943427 | orchestrator | 2026-01-07 01:20:44 | INFO  | 
Setting property image_name: Cirros 2026-01-07 01:20:46.943434 | orchestrator | 2026-01-07 01:20:45 | INFO  | Setting property internal_version: 0.6.3 2026-01-07 01:20:46.943447 | orchestrator | 2026-01-07 01:20:45 | INFO  | Setting property image_original_user: cirros 2026-01-07 01:20:46.943454 | orchestrator | 2026-01-07 01:20:45 | INFO  | Setting property os_version: 0.6.3 2026-01-07 01:20:46.943461 | orchestrator | 2026-01-07 01:20:45 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-07 01:20:46.943473 | orchestrator | 2026-01-07 01:20:45 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-07 01:20:46.943486 | orchestrator | 2026-01-07 01:20:46 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-07 01:20:46.943503 | orchestrator | 2026-01-07 01:20:46 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-07 01:20:46.943514 | orchestrator | 2026-01-07 01:20:46 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-07 01:20:47.298212 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-07 01:20:49.620704 | orchestrator | 2026-01-07 01:20:49 | INFO  | date: 2026-01-06 2026-01-07 01:20:49.620844 | orchestrator | 2026-01-07 01:20:49 | INFO  | image: octavia-amphora-haproxy-2024.2.20260106.qcow2 2026-01-07 01:20:49.620903 | orchestrator | 2026-01-07 01:20:49 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2 2026-01-07 01:20:49.620923 | orchestrator | 2026-01-07 01:20:49 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2.CHECKSUM 2026-01-07 01:20:49.805703 | orchestrator | 2026-01-07 01:20:49 | INFO  | checksum: ccaeac20334f3bd9ba5bef5fa32ee255e2acf964566127f89d3d6aa5eef5b38f 2026-01-07 01:20:49.891889 | orchestrator | 
2026-01-07 01:20:49 | INFO  | It takes a moment until task 65662033-0c02-4409-8011-6e25767f9fac (image-manager) has been started and output is visible here.
2026-01-07 01:22:04.862675 | orchestrator | 2026-01-07 01:20:52 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-06'
2026-01-07 01:22:04.862778 | orchestrator | 2026-01-07 01:20:52 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2: 200
2026-01-07 01:22:04.862790 | orchestrator | 2026-01-07 01:20:52 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-06
2026-01-07 01:22:04.862797 | orchestrator | 2026-01-07 01:20:52 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260106.qcow2
2026-01-07 01:22:04.862804 | orchestrator | 2026-01-07 01:20:53 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:22:04.862811 | orchestrator | 2026-01-07 01:20:55 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862821 | orchestrator | 2026-01-07 01:21:05 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862828 | orchestrator | 2026-01-07 01:21:15 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862834 | orchestrator | 2026-01-07 01:21:26 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862842 | orchestrator | 2026-01-07 01:21:36 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862849 | orchestrator | 2026-01-07 01:21:46 | INFO  | Waiting for import to complete...
2026-01-07 01:22:04.862855 | orchestrator | 2026-01-07 01:21:56 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:22:04.862862 | orchestrator | 2026-01-07 01:21:58 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:22:04.862869 | orchestrator | 2026-01-07 01:22:00 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:22:04.862900 | orchestrator | 2026-01-07 01:22:02 | INFO  | Waiting for image to leave queued state...
2026-01-07 01:22:04.862905 | orchestrator | 2026-01-07 01:22:04 | ERROR  | Image OpenStack Octavia Amphora 2026-01-06 seems stuck in queued state
2026-01-07 01:22:04.862910 | orchestrator | 2026-01-07 01:22:04 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-07 01:22:04.862915 | orchestrator | 2026-01-07 01:22:04 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-07 01:22:04.862919 | orchestrator | 2026-01-07 01:22:04 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-07 01:22:04.862922 | orchestrator | 2026-01-07 01:22:04 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-07 01:22:04.862926 | orchestrator |
2026-01-07 01:22:04.862931 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-01-07 01:22:05.575742 | orchestrator | ERROR
2026-01-07 01:22:05.576158 | orchestrator | {
2026-01-07 01:22:05.576267 | orchestrator |   "delta": "0:03:15.016898",
2026-01-07 01:22:05.576336 | orchestrator |   "end": "2026-01-07 01:22:05.287568",
2026-01-07 01:22:05.576395 | orchestrator |   "msg": "non-zero return code",
2026-01-07 01:22:05.576484 | orchestrator |   "rc": 1,
2026-01-07 01:22:05.576537 | orchestrator |   "start": "2026-01-07 01:18:50.270670"
2026-01-07 01:22:05.576587 | orchestrator | } failure
2026-01-07 01:22:05.594455 |
2026-01-07 01:22:05.594617 | PLAY RECAP
2026-01-07 01:22:05.594705 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-01-07 01:22:05.594747 |
2026-01-07 01:22:05.838210 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-07 01:22:05.839462 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:22:06.717016 |
2026-01-07 01:22:06.717345 | PLAY [Post output play]
2026-01-07 01:22:06.759106 |
2026-01-07 01:22:06.759470 | LOOP [stage-output : Register sources]
2026-01-07 01:22:06.833683 |
2026-01-07 01:22:06.834038 | TASK [stage-output : Check sudo]
2026-01-07 01:22:07.734331 | orchestrator | sudo: a password is required
2026-01-07 01:22:07.882482 | orchestrator | ok: Runtime: 0:00:00.013196
2026-01-07 01:22:07.897231 |
2026-01-07 01:22:07.897397 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-07 01:22:07.934991 |
2026-01-07 01:22:07.935285 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-07 01:22:08.011058 | orchestrator | ok
2026-01-07 01:22:08.019654 |
2026-01-07 01:22:08.019805 | LOOP [stage-output : Ensure target folders exist]
2026-01-07 01:22:08.546192 | orchestrator | ok: "docs"
2026-01-07 01:22:08.546745 |
2026-01-07 01:22:08.811952 | orchestrator | ok: "artifacts"
2026-01-07 01:22:09.127274 | orchestrator | ok: "logs"
2026-01-07 01:22:09.147647 |
2026-01-07 01:22:09.147857 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-07 01:22:09.190809 |
2026-01-07 01:22:09.191176 | TASK [stage-output : Make all log files readable]
2026-01-07 01:22:09.535150 | orchestrator | ok
2026-01-07 01:22:09.543943 |
2026-01-07 01:22:09.544109 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-07 01:22:09.578968 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:09.594289 |
2026-01-07 01:22:09.594510 | TASK [stage-output : Discover log files for compression]
2026-01-07 01:22:09.620211 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:09.634382 |
2026-01-07 01:22:09.634579 | LOOP [stage-output : Archive everything from logs]
2026-01-07 01:22:09.678245 |
2026-01-07 01:22:09.678503 | PLAY [Post cleanup play]
2026-01-07 01:22:09.688897 |
2026-01-07 01:22:09.689046 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:22:09.750457 | orchestrator | ok
2026-01-07 01:22:09.763950 |
2026-01-07 01:22:09.764111 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:22:09.809502 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:09.831988 |
2026-01-07 01:22:09.832234 | TASK [Clean the cloud environment]
2026-01-07 01:22:10.531919 | orchestrator | 2026-01-07 01:22:10 - clean up servers
2026-01-07 01:22:11.287961 | orchestrator | 2026-01-07 01:22:11 - testbed-manager
2026-01-07 01:22:11.367158 | orchestrator | 2026-01-07 01:22:11 - testbed-node-4
2026-01-07 01:22:11.466107 | orchestrator | 2026-01-07 01:22:11 - testbed-node-1
2026-01-07 01:22:11.552255 | orchestrator | 2026-01-07 01:22:11 - testbed-node-2
2026-01-07 01:22:11.643333 | orchestrator | 2026-01-07 01:22:11 - testbed-node-5
2026-01-07 01:22:11.730739 | orchestrator | 2026-01-07 01:22:11 - testbed-node-3
2026-01-07 01:22:11.814165 | orchestrator | 2026-01-07 01:22:11 - testbed-node-0
2026-01-07 01:22:11.897062 | orchestrator | 2026-01-07 01:22:11 - clean up keypairs
2026-01-07 01:22:11.913521 | orchestrator | 2026-01-07 01:22:11 - testbed
2026-01-07 01:22:11.935667 | orchestrator | 2026-01-07 01:22:11 - wait for servers to be gone
2026-01-07 01:22:24.951281 | orchestrator | 2026-01-07 01:22:24 - clean up ports
2026-01-07 01:22:25.152195 | orchestrator | 2026-01-07 01:22:25 - 087bf1d4-7baf-4860-aadb-082c79e04997
2026-01-07 01:22:25.660708 | orchestrator | 2026-01-07 01:22:25 - 1d390051-621b-402a-adcc-ef9660b8991d
2026-01-07 01:22:25.913942 | orchestrator | 2026-01-07 01:22:25 - 672c77f6-3467-46f1-8b92-f6f388db0ebb
2026-01-07 01:22:26.112179 | orchestrator | 2026-01-07 01:22:26 - 9787c67e-2a76-454b-94cf-a2708f217604
2026-01-07 01:22:26.347903 | orchestrator | 2026-01-07 01:22:26 - 9e0a5533-5f8d-489e-94ff-38eb22ccd2ab
2026-01-07 01:22:26.552602 | orchestrator | 2026-01-07 01:22:26 - bd3ac206-1800-4338-9160-fc231d97d193
2026-01-07 01:22:26.785006 | orchestrator | 2026-01-07 01:22:26 - c23c76e2-3b58-4fb3-9860-8e71eb9b7cd6
2026-01-07 01:22:27.013365 | orchestrator | 2026-01-07 01:22:27 - clean up volumes
2026-01-07 01:22:27.132918 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-1-node-base
2026-01-07 01:22:27.171546 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-5-node-base
2026-01-07 01:22:27.216153 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-0-node-base
2026-01-07 01:22:27.263744 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-3-node-base
2026-01-07 01:22:27.307045 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-2-node-base
2026-01-07 01:22:27.348844 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-4-node-base
2026-01-07 01:22:27.392264 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-manager-base
2026-01-07 01:22:27.433342 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-4-node-4
2026-01-07 01:22:27.474720 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-7-node-4
2026-01-07 01:22:27.518549 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-6-node-3
2026-01-07 01:22:27.565200 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-8-node-5
2026-01-07 01:22:27.607948 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-2-node-5
2026-01-07 01:22:27.647497 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-5-node-5
2026-01-07 01:22:27.690376 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-1-node-4
2026-01-07 01:22:27.734514 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-3-node-3
2026-01-07 01:22:27.773003 | orchestrator | 2026-01-07 01:22:27 - testbed-volume-0-node-3
2026-01-07 01:22:27.813207 | orchestrator | 2026-01-07 01:22:27 - disconnect routers
2026-01-07 01:22:27.931211 | orchestrator | 2026-01-07 01:22:27 - testbed
2026-01-07 01:22:28.905516 | orchestrator | 2026-01-07 01:22:28 - clean up subnets
2026-01-07 01:22:28.941607 | orchestrator | 2026-01-07 01:22:28 - subnet-testbed-management
2026-01-07 01:22:29.103248 | orchestrator | 2026-01-07 01:22:29 - clean up networks
2026-01-07 01:22:29.717561 | orchestrator | 2026-01-07 01:22:29 - net-testbed-management
2026-01-07 01:22:30.036987 | orchestrator | 2026-01-07 01:22:30 - clean up security groups
2026-01-07 01:22:30.081469 | orchestrator | 2026-01-07 01:22:30 - testbed-node
2026-01-07 01:22:30.210751 | orchestrator | 2026-01-07 01:22:30 - testbed-management
2026-01-07 01:22:30.322491 | orchestrator | 2026-01-07 01:22:30 - clean up floating ips
2026-01-07 01:22:30.354659 | orchestrator | 2026-01-07 01:22:30 - 81.163.193.241
2026-01-07 01:22:30.706991 | orchestrator | 2026-01-07 01:22:30 - clean up routers
2026-01-07 01:22:30.806455 | orchestrator | 2026-01-07 01:22:30 - testbed
2026-01-07 01:22:32.401608 | orchestrator | ok: Runtime: 0:00:21.946147
2026-01-07 01:22:32.406146 |
2026-01-07 01:22:32.406316 | PLAY RECAP
2026-01-07 01:22:32.406539 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-07 01:22:32.406610 |
2026-01-07 01:22:32.557018 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-07 01:22:32.559481 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:22:33.339778 |
2026-01-07 01:22:33.339954 | PLAY [Cleanup play]
2026-01-07 01:22:33.357144 |
2026-01-07 01:22:33.357304 | TASK [Set cloud fact (Zuul deployment)]
2026-01-07 01:22:33.415717 | orchestrator | ok
2026-01-07 01:22:33.425505 |
2026-01-07 01:22:33.425709 | TASK [Set cloud fact (local deployment)]
2026-01-07 01:22:33.460936 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:33.476907 |
2026-01-07 01:22:33.477065 | TASK [Clean the cloud environment]
2026-01-07 01:22:34.686785 | orchestrator | 2026-01-07 01:22:34 - clean up servers
2026-01-07 01:22:35.170483 | orchestrator | 2026-01-07 01:22:35 - clean up keypairs
2026-01-07 01:22:35.191465 | orchestrator | 2026-01-07 01:22:35 - wait for servers to be gone
2026-01-07 01:22:35.236601 | orchestrator | 2026-01-07 01:22:35 - clean up ports
2026-01-07 01:22:35.323766 | orchestrator | 2026-01-07 01:22:35 - clean up volumes
2026-01-07 01:22:35.397852 | orchestrator | 2026-01-07 01:22:35 - disconnect routers
2026-01-07 01:22:35.420278 | orchestrator | 2026-01-07 01:22:35 - clean up subnets
2026-01-07 01:22:35.442692 | orchestrator | 2026-01-07 01:22:35 - clean up networks
2026-01-07 01:22:35.571758 | orchestrator | 2026-01-07 01:22:35 - clean up security groups
2026-01-07 01:22:35.604862 | orchestrator | 2026-01-07 01:22:35 - clean up floating ips
2026-01-07 01:22:35.631099 | orchestrator | 2026-01-07 01:22:35 - clean up routers
2026-01-07 01:22:36.017945 | orchestrator | ok: Runtime: 0:00:01.368209
2026-01-07 01:22:36.022134 |
2026-01-07 01:22:36.022297 | PLAY RECAP
2026-01-07 01:22:36.022401 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-07 01:22:36.022578 |
2026-01-07 01:22:36.170629 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-07 01:22:36.171726 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:22:36.967607 |
2026-01-07 01:22:36.967792 | PLAY [Base post-fetch]
2026-01-07 01:22:36.984985 |
2026-01-07 01:22:36.985165 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-07 01:22:37.041372 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:37.059555 |
2026-01-07 01:22:37.059870 | TASK [fetch-output : Set log path for single node]
2026-01-07 01:22:37.120708 | orchestrator | ok
2026-01-07 01:22:37.129937 |
2026-01-07 01:22:37.130104 | LOOP [fetch-output : Ensure local output dirs]
2026-01-07 01:22:37.636049 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/logs"
2026-01-07 01:22:37.917843 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/artifacts"
2026-01-07 01:22:38.183088 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a69b794a49924d19914edb2910e3f0b3/work/docs"
2026-01-07 01:22:38.213802 |
2026-01-07 01:22:38.214008 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-07 01:22:39.230986 | orchestrator | changed: .d..t...... ./
2026-01-07 01:22:39.231365 | orchestrator | changed: All items complete
2026-01-07 01:22:39.231451 |
2026-01-07 01:22:39.988086 | orchestrator | changed: .d..t...... ./
2026-01-07 01:22:40.735318 | orchestrator | changed: .d..t...... ./
2026-01-07 01:22:40.769400 |
2026-01-07 01:22:40.769628 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-07 01:22:40.813114 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:40.818849 | orchestrator | skipping: Conditional result was False
2026-01-07 01:22:40.843171 |
2026-01-07 01:22:40.843305 | PLAY RECAP
2026-01-07 01:22:40.843379 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-07 01:22:40.843439 |
2026-01-07 01:22:40.994171 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-07 01:22:40.998893 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:22:41.989918 |
2026-01-07 01:22:41.990099 | PLAY [Base post]
2026-01-07 01:22:42.006027 |
2026-01-07 01:22:42.006191 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-07 01:22:43.047840 | orchestrator | changed
2026-01-07 01:22:43.057235 |
2026-01-07 01:22:43.057388 | PLAY RECAP
2026-01-07 01:22:43.057495 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-07 01:22:43.057567 |
2026-01-07 01:22:43.189570 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-07 01:22:43.192212 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-07 01:22:44.024587 |
2026-01-07 01:22:44.024898 | PLAY [Base post-logs]
2026-01-07 01:22:44.036528 |
2026-01-07 01:22:44.036690 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-07 01:22:44.525056 | localhost | changed
2026-01-07 01:22:44.543364 |
2026-01-07 01:22:44.543596 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-07 01:22:44.573190 | localhost | ok
2026-01-07 01:22:44.580101 |
2026-01-07 01:22:44.580270 | TASK [Set zuul-log-path fact]
2026-01-07 01:22:44.610514 | localhost | ok
2026-01-07 01:22:44.626698 |
2026-01-07 01:22:44.626906 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-07 01:22:44.667025 | localhost | ok
2026-01-07 01:22:44.675880 |
2026-01-07 01:22:44.676220 | TASK [upload-logs : Create log directories]
2026-01-07 01:22:45.248170 | localhost | changed
2026-01-07 01:22:45.254634 |
2026-01-07 01:22:45.254805 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-07 01:22:45.778744 | localhost -> localhost | ok: Runtime: 0:00:00.007043
2026-01-07 01:22:45.783499 |
2026-01-07 01:22:45.783635 | TASK [upload-logs : Upload logs to log server]
2026-01-07 01:22:46.385158 | localhost | Output suppressed because no_log was given
2026-01-07 01:22:46.389880 |
2026-01-07 01:22:46.390058 | LOOP [upload-logs : Compress console log and json output]
2026-01-07 01:22:46.465444 | localhost | skipping: Conditional result was False
2026-01-07 01:22:46.470731 | localhost | skipping: Conditional result was False
2026-01-07 01:22:46.486409 |
2026-01-07 01:22:46.486672 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-07 01:22:46.537971 | localhost | skipping: Conditional result was False
2026-01-07 01:22:46.538781 |
2026-01-07 01:22:46.541904 | localhost | skipping: Conditional result was False
2026-01-07 01:22:46.549761 |
2026-01-07 01:22:46.549997 | LOOP [upload-logs : Upload console log and json output]